[jira] [Updated] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-9666:

Attachment: HDFS-9666.003.patch

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch, 
> HDFS-9666.002.patch, HDFS-9666.003.patch
>
>
> We want to improve HDFS random read performance for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can only set the hfile storage policy to 
> ONE_SSD rather than ALL_SSD, and a regionserver on a non-SSD host can only 
> read the local disk replica. So we developed this feature in the hdfs client 
> to read a remote SSD/RAM replica in preference to the local disk replica.
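Conceptually, the client-side change amounts to re-ranking the replicas returned 
by the namenode so that faster storage types win even over a local DISK replica. 
A rough illustrative sketch, assuming Hadoop's {{StorageType}} and 
{{DatanodeInfoWithStorage}} classes (this is not the attached patch's code, 
which hooks into the client's replica selection):
{code:java}
import java.util.Arrays;
import java.util.Comparator;

import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;

// Illustrative only: rank replicas by storage speed. Arrays.sort is stable,
// so among replicas with the same storage type the namenode's
// network-distance ordering (and thus locality) is preserved.
public class StorageTypeFirstSorter {
  private static int rank(StorageType type) {
    switch (type) {
      case RAM_DISK: return 0; // fastest first
      case SSD:      return 1;
      case DISK:     return 2;
      default:       return 3; // ARCHIVE and anything else last
    }
  }

  public static void sortByStorageType(DatanodeInfoWithStorage[] replicas) {
    Arrays.sort(replicas,
        Comparator.comparingInt((DatanodeInfoWithStorage r) ->
            rank(r.getStorageType())));
  }
}
{code}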



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13218) Log audit event only used last EC policy name when add multiple policies from file

2018-03-01 Thread liaoyuxiangqin (JIRA)
liaoyuxiangqin created HDFS-13218:
-

 Summary: Log audit event only used last EC policy name when add 
multiple policies from file 
 Key: HDFS-13218
 URL: https://issues.apache.org/jira/browse/HDFS-13218
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.1.0
Reporter: liaoyuxiangqin


When I read addErasureCodingPolicies() in the namenode's FSNamesystem class, I 
found that the following code only passes the last EC policy name to 
logAuditEvent, so the audit log cannot track all of the policies when multiple 
erasure coding policies are added to the ErasureCodingPolicyManager. Thanks.
{code:java|title=FSNamesystem.java|borderStyle=solid}
try {
  checkOperation(OperationCategory.WRITE);
  checkNameNodeSafeMode("Cannot add erasure coding policy");
  for (ErasureCodingPolicy policy : policies) {
try {
  ErasureCodingPolicy newPolicy =
  FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
  logRetryCache);
  addECPolicyName = newPolicy.getName();
  responses.add(new AddErasureCodingPolicyResponse(newPolicy));
} catch (HadoopIllegalArgumentException e) {
  responses.add(new AddErasureCodingPolicyResponse(policy, e));
}
  }
  success = true;
  return responses.toArray(new AddErasureCodingPolicyResponse[0]);
} finally {
  writeUnlock(operationName);
  if (success) {
getEditLog().logSync();
  }
  logAuditEvent(success, operationName, addECPolicyName, null, null);
}

{code}
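One possible direction for a fix, as a minimal illustrative sketch (not a 
committed patch): accumulate every successfully added policy name and pass the 
whole list to the audit event, instead of letting {{addECPolicyName}} hold only 
the last name.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: collect every policy name that was added
// successfully so one audit entry can report the whole batch instead of
// just the last name.
public class AuditAllPolicyNames {
  public static void main(String[] args) {
    List<String> addedNames = new ArrayList<>();
    for (String policyName : new String[] {"RS-3-2-1024k", "RS-6-3-1024k"}) {
      // ... FSDirErasureCodingOp.addErasureCodingPolicy(...) would run here;
      // only record the name when the add succeeds ...
      addedNames.add(policyName);
    }
    // Single audit string covering all added policies:
    String auditNames = String.join(",", addedNames);
    System.out.println(auditNames); // RS-3-2-1024k,RS-6-3-1024k
  }
}
{code}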



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-01 Thread liaoyuxiangqin (JIRA)
liaoyuxiangqin created HDFS-13217:
-

 Summary: Log audit event only used last EC policy name when add 
multiple policies from file 
 Key: HDFS-13217
 URL: https://issues.apache.org/jira/browse/HDFS-13217
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.1.0
Reporter: liaoyuxiangqin


When I read addErasureCodingPolicies() in the namenode's FSNamesystem class, I 
found that the following code only passes the last EC policy name to 
logAuditEvent, so the audit log cannot track all of the policies when multiple 
erasure coding policies are added to the ErasureCodingPolicyManager. Thanks.
{code:java|title=FSNamesystem.java|borderStyle=solid}
try {
  checkOperation(OperationCategory.WRITE);
  checkNameNodeSafeMode("Cannot add erasure coding policy");
  for (ErasureCodingPolicy policy : policies) {
try {
  ErasureCodingPolicy newPolicy =
  FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
  logRetryCache);
  addECPolicyName = newPolicy.getName();
  responses.add(new AddErasureCodingPolicyResponse(newPolicy));
} catch (HadoopIllegalArgumentException e) {
  responses.add(new AddErasureCodingPolicyResponse(policy, e));
}
  }
  success = true;
  return responses.toArray(new AddErasureCodingPolicyResponse[0]);
} finally {
  writeUnlock(operationName);
  if (success) {
getEditLog().logSync();
  }
  logAuditEvent(success, operationName, addECPolicyName, null, null);
}

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13212) RBF: Fix router location cache issue

2018-03-01 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383260#comment-16383260
 ] 

Weiwei Wu edited comment on HDFS-13212 at 3/2/18 6:51 AM:
--

[~elgoiri] [~linyiqun] Uploaded [^HDFS-13212-002.patch] with a unit test.

Please review, thanks. :)


was (Author: wuweiwei):
[~elgoiri] [~linyiqun] Uploaded a new patch with a unit test; please review, 
thanks. :)

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache: the old location cache 
> is never invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.
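A minimal sketch of the needed invalidation, with hypothetical names (the real 
fix would live in MountTableResolver, whose internal cache maps source paths to 
resolved locations):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: when a mount table entry is added or updated, every
// cached resolution at or under that mount point may now be stale and must
// be dropped so the next lookup re-resolves it.
public class LocationCacheSketch {
  private final Map<String, String> locationCache = new ConcurrentHashMap<>();

  public void invalidate(String mountPoint) {
    locationCache.keySet().removeIf(path ->
        path.equals(mountPoint) || path.startsWith(mountPoint + "/"));
  }
}
{code}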



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-01 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383260#comment-16383260
 ] 

Weiwei Wu commented on HDFS-13212:
--

[~elgoiri] [~linyiqun] Uploaded a new patch with a unit test; please review, 
thanks. :)

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache: the old location cache 
> is never invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue

2018-03-01 Thread Weiwei Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Wu updated HDFS-13212:
-
Attachment: HDFS-13212-002.patch

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache: the old location cache 
> is never invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-01 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383236#comment-16383236
 ] 

Tao Jie commented on HDFS-13214:


[~elgoiri] [~ywskycn] [~linyiqun] thank you for your responses.
{quote}
In our internal setup, we configure dfs.nameservice.id.
{quote}
I have done a brief test of the HA and non-HA configurations: if we don't 
specify {{dfs.nameservice.id}}, the exception occurs whether or not HA is 
enabled, so I don't think HA/non-HA mode is directly related to this issue.
In the current code logic we try to find the local namenode host from the 
configuration, so I think we should set {{dfs.nameservice.id}} to {{ns1}} or 
{{ns2}} rather than {{ns-fed}}; otherwise the Router would mistakenly consider 
itself the local namenode.
Today {{dfs.nameservice.id}} is not a required property in a federated cluster 
(HA or non-HA), right? Two options:
1. Update the documentation to make clear that {{dfs.nameservice.id}} must be 
specified on the Router node (see the snippet below).
2. Improve the logic for finding the local namenode address when 
{{dfs.nameservice.id}} is not specified.
Please correct me if I am wrong.
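For example, a sketch of option 1's Router-side configuration, assuming the 
Router runs on the {{ns1}} namenode host (the value is per-node, not part of 
this issue's original config):
{code}
<property>
  <name>dfs.nameservice.id</name>
  <value>ns1</value>
</property>
{code}
With this set, {{DFSUtil.getNameServiceId}} no longer has to guess among the 
multiple rpc-address entries that match the local host.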

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Priority: Major
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used for client to access the Router. 
> However with this configuration on server node, Router fails to start with 
> error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-01 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383206#comment-16383206
 ] 

maobaolong commented on HDFS-13215:
---

+1, good idea. 

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> We are splitting the HDFS client code base, and Router-based Federation is 
> potentially independent enough to live in its own module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383195#comment-16383195
 ] 

genericqa commented on HDFS-13197:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
18s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 59s{color} | {color:orange} root: The patch generated 22 new + 140 unchanged 
- 0 fixed = 162 total (was 140) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}144m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}260m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.cblock.TestCBlockReadWrite |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-13197 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383178#comment-16383178
 ] 

Dennis Huo commented on HDFS-13056:
---

Added [^HDFS-13056.005.patch] to fix TestHdfsConfigFields (and also to merge 
trunk).

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
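The composability being proposed can be demonstrated in miniature with plain 
CRC32: given crc(A), crc(B), and len(B), crc(A||B) is computable without 
re-reading any data. Below is a self-contained Java sketch of the well-known 
zlib-style crc32_combine construction; it only illustrates the underlying math 
and is not the patch's actual code.
{code:java}
import java.util.zip.CRC32;

// Demo of CRC32 composability (zlib's crc32_combine ported to Java).
public class Crc32CombineDemo {

  // Multiply a 32x32 GF(2) matrix by a 32-bit vector.
  private static long gf2MatrixTimes(long[] mat, long vec) {
    long sum = 0;
    for (int i = 0; vec != 0; vec >>>= 1, i++) {
      if ((vec & 1) != 0) {
        sum ^= mat[i];
      }
    }
    return sum;
  }

  // square = mat * mat over GF(2).
  private static void gf2MatrixSquare(long[] square, long[] mat) {
    for (int i = 0; i < 32; i++) {
      square[i] = gf2MatrixTimes(mat, mat[i]);
    }
  }

  // Combine crc1 = CRC32(A) and crc2 = CRC32(B) into CRC32(A||B),
  // given only len2 = B.length -- no access to the underlying bytes.
  public static long combine(long crc1, long crc2, long len2) {
    if (len2 <= 0) {
      return crc1;
    }
    long[] even = new long[32]; // operator for 2^n zero bits, n even
    long[] odd = new long[32];  // operator for 2^n zero bits, n odd
    odd[0] = 0xEDB88320L;       // reflected CRC-32 polynomial: one zero bit
    long row = 1;
    for (int i = 1; i < 32; i++) {
      odd[i] = row;
      row <<= 1;
    }
    gf2MatrixSquare(even, odd); // operator for two zero bits
    gf2MatrixSquare(odd, even); // operator for four zero bits
    do {
      gf2MatrixSquare(even, odd); // first pass: one full zero byte
      if ((len2 & 1) != 0) {
        crc1 = gf2MatrixTimes(even, crc1);
      }
      len2 >>= 1;
      if (len2 == 0) {
        break;
      }
      gf2MatrixSquare(odd, even);
      if ((len2 & 1) != 0) {
        crc1 = gf2MatrixTimes(odd, crc1);
      }
      len2 >>= 1;
    } while (len2 != 0);
    return crc1 ^ crc2;
  }

  public static void main(String[] args) {
    byte[] a = "block one data".getBytes();
    byte[] b = "block two data".getBytes();
    CRC32 crcA = new CRC32();
    crcA.update(a);
    CRC32 crcB = new CRC32();
    crcB.update(b);
    CRC32 crcAB = new CRC32();
    crcAB.update(a);
    crcAB.update(b);
    long combined = combine(crcA.getValue(), crcB.getValue(), b.length);
    // Prints true: CRC32(A||B) derived without re-reading A or B.
    System.out.println(combined == crcAB.getValue());
  }
}
{code}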



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Status: Patch Available  (was: Open)

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Status: Open  (was: Patch Available)

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056.005.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, HDFS-13056.005.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383168#comment-16383168
 ] 

Jiandan Yang  commented on HDFS-9666:
-

Fixed a compile error and uploaded the v2 patch.

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch, 
> HDFS-9666.002.patch
>
>
> We want to improve HDFS random read performance for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can only set the hfile storage policy to 
> ONE_SSD rather than ALL_SSD, and a regionserver on a non-SSD host can only 
> read the local disk replica. So we developed this feature in the hdfs client 
> to read a remote SSD/RAM replica in preference to the local disk replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-9666:

Attachment: HDFS-9666.002.patch

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch, 
> HDFS-9666.002.patch
>
>
> We want to improve HDFS random read performance for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can only set the hfile storage policy to 
> ONE_SSD rather than ALL_SSD, and a regionserver on a non-SSD host can only 
> read the local disk replica. So we developed this feature in the hdfs client 
> to read a remote SSD/RAM replica in preference to the local disk replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383166#comment-16383166
 ] 

genericqa commented on HDFS-13056:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
51s{color} | {color:red} Docker failed to build yetus/hadoop:c2d96dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912719/HDFS-13056-branch-2.8.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23263/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383162#comment-16383162
 ] 

Dennis Huo commented on HDFS-13056:
---

Also added a backport to branch-2.8, which I was able to successfully test 
end-to-end on a real distributed cluster: [^HDFS-13056-branch-2.8.002.patch]

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056-branch-2.8.002.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.poc1.patch, 
> HDFS-13056.001.patch, HDFS-13056.002.patch, HDFS-13056.003.patch, 
> HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to those 
> of striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in-place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383143#comment-16383143
 ] 

genericqa commented on HDFS-13056:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} root: The patch generated 0 new + 609 unchanged - 1 
fixed = 609 total (was 610) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912701/HDFS-13056.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 3947910dbb4b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 

[jira] [Commented] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383136#comment-16383136
 ] 

genericqa commented on HDFS-9666:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project: The patch generated 9 new + 
66 unchanged - 0 fixed = 75 total (was 66) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
33s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-9666 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912711/HDFS-9666.001.patch |
| Optional Tests |  

[jira] [Commented] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383126#comment-16383126
 ] 

Hudson commented on HDFS-13210:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13756 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13756/])
HDFS-13210. Fix the typo in MiniDFSCluster class. Contributed by fang (yqlin: 
rev 55669515f626eb5b1f3ba25095f3e306c243d899)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>   Iterable<Block> blocksToInject, String bpid) throws IOException {
> if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>   throw new IndexOutOfBoundsException();
> }
> final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
> final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
> if (!(dataSet instanceof SimulatedFSDataset)) {
>   throw new IOException("injectBlocks is valid only for 
> SimilatedFSDataset");
> }
> ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.
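 
For reference, the fix is a one-word change to the exception message. A minimal
sketch of the corrected check (same method as quoted above):
{code:java}
// After the fix: the message spells SimulatedFSDataset correctly.
if (!(dataSet instanceof SimulatedFSDataset)) {
  throw new IOException("injectBlocks is valid only for SimulatedFSDataset");
}
{code}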



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13210:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~zhenyi] for the contribution!

> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>   Iterable<Block> blocksToInject, String bpid) throws IOException {
> if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>   throw new IndexOutOfBoundsException();
> }
> final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
> final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
> if (!(dataSet instanceof SimulatedFSDataset)) {
>   throw new IOException("injectBlocks is valid only for 
> SimilatedFSDataset");
> }
> ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383125#comment-16383125
 ] 

Yiqun Lin commented on HDFS-13210:
--

+1.

> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>   Iterable<Block> blocksToInject, String bpid) throws IOException {
> if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>   throw new IndexOutOfBoundsException();
> }
> final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
> final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
> if (!(dataSet instanceof SimulatedFSDataset)) {
>   throw new IOException("injectBlocks is valid only for 
> SimilatedFSDataset");
> }
> ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383120#comment-16383120
 ] 

genericqa commented on HDFS-13210:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 201 unchanged - 2 fixed = 201 total (was 203) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13210 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912702/HDFS-13210.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a976a22fdf8 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 923e177 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23257/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23257/testReport/ |
| Max. process+thread count | 3112 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load

2018-03-01 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383114#comment-16383114
 ] 

He Xiaoqiao commented on HDFS-13183:


[~xkrogen]
{quote}if the SbNN goes down, the ANN is not aware of this, but the balancer 
should start to read from the ANN instead of SbNN.{quote}
Indeed, v003 cannot handle this situation. I think it would be better if the 
client could decide which NameNode to request, which may require refactoring 
{{NameNodeConnector}}. Having reviewed the goal of HDFS-12976, maybe we need to 
wait for it to finish.
Thanks again for your detailed code review, [~xkrogen].

> Standby NameNode process getBlocks request to reduce Active load
> 
>
> Key: HDFS-13183
> URL: https://issues.apache.org/jira/browse/HDFS-13183
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, namenode
>Affects Versions: 2.7.5, 3.1.0, 2.9.1, 2.8.4, 3.0.2
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, 
> HDFS-13183-trunk.003.patch
>
>
> The performance of the Active NameNode can be impacted when {{Balancer}} requests 
> #getBlocks, since querying the blocks of overly full DNs is currently extremely 
> inefficient. The main reason is that {{NameNodeRpcServer#getBlocks}} holds the 
> read lock for a long time. In the extreme case, all handlers of the Active 
> NameNode RPC server are occupied by one {{NameNodeRpcServer#getBlocks}} reader 
> plus other write operation calls, and the Active NameNode enters a state of 
> false death for seconds or even minutes.
> Similar performance concerns about the Balancer have been reported by HDFS-9412, 
> HDFS-7967, etc.
> If the Standby NameNode can shoulder the heavy #getBlocks burden, it could speed 
> up balancing and reduce the performance impact on the Active NameNode.
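 
To make the contention concrete, here is a small self-contained Java sketch (not 
HDFS code; all names are illustrative) of the same pattern: one long-running 
reader holding the read lock of a fair read-write lock stalls a writer, just as a 
slow {{getBlocks}} scan stalls write RPCs behind the namesystem lock:
{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockStarvationDemo {
  private static final ReentrantReadWriteLock LOCK =
      new ReentrantReadWriteLock(true);

  public static void main(String[] args) throws InterruptedException {
    // Simulates a Balancer getBlocks() call: one reader holds the read
    // lock for a long time while scanning an overly full DN.
    Thread slowReader = new Thread(() -> {
      LOCK.readLock().lock();
      try {
        Thread.sleep(5000); // long scan under the read lock
      } catch (InterruptedException ignored) {
      } finally {
        LOCK.readLock().unlock();
      }
    });

    // Simulates a write RPC: it cannot proceed until the slow reader
    // releases the read lock, so the handler appears hung.
    Thread writer = new Thread(() -> {
      long start = System.currentTimeMillis();
      LOCK.writeLock().lock();
      try {
        System.out.println("writer waited "
            + (System.currentTimeMillis() - start) + " ms");
      } finally {
        LOCK.writeLock().unlock();
      }
    });

    slowReader.start();
    Thread.sleep(100); // let the reader grab the lock first
    writer.start();
    slowReader.join();
    writer.join();
  }
}
{code}
Serving such reads from the Standby NameNode would take this wait off the Active's lock.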



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2018-03-01 Thread TanYuxin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383110#comment-16383110
 ] 

TanYuxin commented on HDFS-12749:
-

Thanks [~hexiaoqiao] and [~xkrogen] very much for reviewing and resolving the 
issue. I think the v003 patch contributed by [~hexiaoqiao] is more effective and 
simpler.
Would anyone mind having a review? In our production cluster, the problem has been 
fixed for months by the proposed patch.

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749.001.patch
>
>
> Our cluster now has thousands of DNs and millions of files and blocks. When the 
> NN restarts, the NN's load is very high.
> After the NN restarts, a DN will call the BPServiceActor#reRegister method to 
> register. But the register RPC will get an IOException since the NN is busy 
> dealing with Block Reports. The exception is caught at 
> BPServiceActor#processCommand.
> Next is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the Block 
> Report cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode is busy
> LOG.info("Problem connecting to server: " + nnAddr);
> sleepAndLogInterrupts(1000, "connecting to server");
>   }
> }
> 
> LOG.info("Block pool " + this + " successfully registered with NN");
> bpos.registrationSucceeded(this, bpRegistration);
> // random short 

[jira] [Resolved] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster resolved HDFS-13216.
--
Resolution: Fixed

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: mster
>Priority: Trivial
>  Labels: INodesInPath
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
  Priority: Trivial  (was: Major)
Issue Type: Task  (was: Bug)

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: mster
>Priority: Trivial
>  Labels: INodesInPath
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Affects Version/s: (was: 3.0.0)
  Description: (was: if (!isRef && isDir && dir.isWithSnapshot()) {

} else if (isRef && isDir && !lastComp) {

// we should get the snapshotID of the INodeReference in path but we can't enter 
this branch

 })
  Component/s: (was: snapshots)
   (was: hdfs)

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: mster
>Priority: Major
>  Labels: INodesInPath
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383085#comment-16383085
 ] 

genericqa commented on HDFS-12641:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 10m 
41s{color} | {color:red} Docker failed to build yetus/hadoop:ea57d10. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12641 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892157/HDFS-12641.branch-2.7.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23261/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445
> -
>
> Key: HDFS-12641
> URL: https://issues.apache.org/jira/browse/HDFS-12641
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.7.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-12641.branch-2.7.001.patch
>
>
> Our internal testing caught a regression in HDFS-11445 when we cherry-picked 
> the commit into CDH. Basically, it produces bogus missing-file warnings. 
> Further analysis revealed that the regression is actually fixed by HDFS-11755.
> Because of the order in which the commits were merged in branch-2.8 ~ trunk 
> (HDFS-11755 was committed before HDFS-11445), the regression never actually 
> surfaced for Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has 
> HDFS-11445 but not HDFS-11755, I suspect the regression is more visible for 
> Hadoop 2.7.4.
> I am filing this jira to raise awareness, rather than simply backporting 
> HDFS-11755 into branch-2.7.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-9666:

Status: Patch Available  (was: Open)

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0, 2.6.0
>Reporter: ade
>Assignee: ade
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch
>
>
> We want to improve the random read performance of HDFS for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can set hfiles with only the ONE_SSD (not 
> ALL_SSD) storage policy, and a regionserver on a non-SSD host can only read the 
> local disk replica. So we developed this feature in the hdfs client to read even 
> a remote SSD/RAM replica prior to the local disk replica.
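 
As a rough illustration of the replica-ordering idea (this is not the actual 
patch; the types, ranking, and class names below are assumptions made for the 
sketch), the client-side change amounts to sorting located replicas by storage 
speed before network distance:
{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ReplicaOrderingSketch {
  enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE } // fastest to slowest

  static class Replica {
    final String host;
    final StorageType type;
    final int distance; // network distance; 0 = local node
    Replica(String host, StorageType type, int distance) {
      this.host = host; this.type = type; this.distance = distance;
    }
    @Override public String toString() { return host + "/" + type; }
  }

  public static void main(String[] args) {
    List<Replica> replicas = new ArrayList<>();
    replicas.add(new Replica("local-dn", StorageType.DISK, 0));
    replicas.add(new Replica("remote-dn", StorageType.SSD, 4));
    // Prefer faster storage first, then closer nodes; this puts the
    // remote SSD replica ahead of the local DISK replica.
    replicas.sort(Comparator
        .comparingInt((Replica r) -> r.type.ordinal())
        .thenComparingInt(r -> r.distance));
    System.out.println(replicas); // [remote-dn/SSD, local-dn/DISK]
  }
}
{code}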



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  reassigned HDFS-9666:
---

Assignee: Jiandan Yang   (was: ade)

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch
>
>
> We want to improve the random read performance of HDFS for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can set hfiles with only the ONE_SSD (not 
> ALL_SSD) storage policy, and a regionserver on a non-SSD host can only read the 
> local disk replica. So we developed this feature in the hdfs client to read even 
> a remote SSD/RAM replica prior to the local disk replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383083#comment-16383083
 ] 

Jiandan Yang  commented on HDFS-9666:
-

Uploaded the v1 patch, based on trunk.

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: ade
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch
>
>
> We want to improve the random read performance of HDFS for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can set hfiles with only the ONE_SSD (not 
> ALL_SSD) storage policy, and a regionserver on a non-SSD host can only read the 
> local disk replica. So we developed this feature in the hdfs client to read even 
> a remote SSD/RAM replica prior to the local disk replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-03-01 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-9666:

Attachment: HDFS-9666.001.patch

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: ade
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch
>
>
> We want to improve the random read performance of HDFS for HBase, so we enabled 
> heterogeneous storage in our cluster. But only ~50% of the datanode & 
> regionserver hosts have SSD, so we can set hfiles with only the ONE_SSD (not 
> ALL_SSD) storage policy, and a regionserver on a non-SSD host can only read the 
> local disk replica. So we developed this feature in the hdfs client to read even 
> a remote SSD/RAM replica prior to the local disk replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Component/s: snapshots

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots
>Affects Versions: 3.0.0
>Reporter: mster
>Priority: Major
>  Labels: INodesInPath
>
> if (!isRef && isDir && dir.isWithSnapshot()) {
> } else if (isRef && isDir && !lastComp) {
> // we should get the snapshotID of the INodeReference in path but we can't enter 
> this branch
>  }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Description: 
if (!isRef && isDir && dir.isWithSnapshot()) {

} else if (isRef && isDir && !lastComp) {

// we should get the snapshotID of the INodeReference in path but we can't enter 
this branch

 }

  was:
if (!isRef && isDir && dir.isWithSnapshot()) {
 //if the path is a non-snapshot path, update the latest snapshot.
 if (!isSnapshot && shouldUpdateLatestId(
 dir.getDirectoryWithSnapshotFeature().getLastSnapshotId(),
 snapshotId)) {
 snapshotId = dir.getDirectoryWithSnapshotFeature().getLastSnapshotId();
 }
} else if (isRef && isDir && !lastComp) {

// we can't resolve the INodeReference in path

 }


> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots
>Affects Versions: 3.0.0
>Reporter: mster
>Priority: Major
>  Labels: INodesInPath
>
> if (!isRef && isDir && dir.isWithSnapshot()) {
> } else if (isRef && isDir && !lastComp) {
> // we should get the snapshotID of the INodeReference in path but we can't enter 
> this branch
>  }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383076#comment-16383076
 ] 

Íñigo Goiri commented on HDFS-13204:


What would {{federationdfshealth-router-safemode}} be?

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In the federation health webpage, the safe mode icons of Subclusters and Routers 
> are inconsistent.
> The safe mode icon of Subclusters may mislead users into thinking the name 
> service is under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't do write-related 
> operations. So I think the safe mode icon in Subclusters should be modified, 
> which may be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-01 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383074#comment-16383074
 ] 

Yiqun Lin commented on HDFS-13214:
--

Hi [~Tao Jie], I just looked into the related logic of this. Looks like the 
Router wants to monitor its local node but doesn't find the right nsId.
{quote}
we configure dfs.nameservice.id
{quote}
I think this way can fix the issue. You may configure it like the following:
{noformat}
<property>
  <name>dfs.nameservice.id</name>
  <value>ns-fed</value>
</property>
{noformat}
I'm not so sure whether this also makes sense in HA mode.

[~elgoiri], some thoughts from this: there are two things we may need to complete.

# We should document more details of setting up the RBF env, including HA and 
non-HA modes and which settings must be configured. This will help users adopt RBF.
# We may need to add unit tests covering RBF setup in different envs (HA and 
non-HA). This will help us catch issues in the Router's startup.

I can help do #2 if you also agree with this.
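 
For the HA case, a minimal sketch of the extra setting the startup error asks 
for (the value {{r1}} for the Router on host1 is an assumption; it is not yet 
confirmed on this issue):
{noformat}
<property>
  <name>dfs.ha.namenode.id</name>
  <value>r1</value>
</property>
{noformat}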

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Priority: Major
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by the client to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with this error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445

2018-03-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383065#comment-16383065
 ] 

Arpit Agarwal commented on HDFS-12641:
--

+1 on the change, it looks correct to me. I also see the new test passes even 
after reverting the key part of the fix.
{code}
-  if (!namesystem.isPopulatingReplQueues()) {
+  if (!namesystem.isPopulatingReplQueues() || !block.isComplete()) {
{code}
I haven't yet debugged what needs to be fixed in the test case.

> Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445
> -
>
> Key: HDFS-12641
> URL: https://issues.apache.org/jira/browse/HDFS-12641
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.7.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-12641.branch-2.7.001.patch
>
>
> Our internal testing caught a regression in HDFS-11445 when we cherry-picked 
> the commit into CDH. Basically, it produces bogus missing-file warnings. 
> Further analysis revealed that the regression is actually fixed by HDFS-11755.
> Because of the order in which the commits were merged in branch-2.8 ~ trunk 
> (HDFS-11755 was committed before HDFS-11445), the regression never actually 
> surfaced for Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has 
> HDFS-11445 but not HDFS-11755, I suspect the regression is more visible for 
> Hadoop 2.7.4.
> I am filing this jira to raise awareness, rather than simply backporting 
> HDFS-11755 into branch-2.7.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2018-03-01 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383063#comment-16383063
 ] 

Huafeng Wang commented on HDFS-12257:
-

Hi [~Sammi], I'm not sure about that. The patch hasn't been reviewed, and it looks 
like it has conflicts with trunk now, so it has to be revised. I can try to update 
the patch, but I'm afraid it will take a few days.

> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>Priority: Major
> Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, 
> HDFS-12257.003.patch
>
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we 
> should expose listing there as well.
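 
A minimal sketch of what the proposed {{HdfsAdmin}} wrapper could look like, 
assuming it simply delegates to {{DistributedFileSystem#getSnapshottableDirListing}} 
(the actual patch may differ):
{code:java}
// Hypothetical addition to org.apache.hadoop.hdfs.client.HdfsAdmin;
// 'dfs' is HdfsAdmin's existing DistributedFileSystem field.
public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
    throws IOException {
  return dfs.getSnapshottableDirListing();
}
{code}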



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383051#comment-16383051
 ] 

genericqa commented on HDFS-11885:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-11885 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880136/HDFS-11885.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23260/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.
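 
A sketch of the non-blocking direction (illustrative only, using a plain 
executor and a stand-in provider interface; the eventual fix may differ):
{code:java}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EdekWarmUpSketch {
  /** Stand-in for the real KeyProvider API used by the NameNode. */
  interface KeyProviderLike {
    void warmUpEncryptedKeys(String keyName) throws IOException;
  }

  private final ExecutorService warmUpPool =
      Executors.newSingleThreadExecutor();

  // Kick off cache warm-up in the background so the createZone RPC can
  // return immediately instead of blocking on a slow or down KMS.
  void warmUpAsync(KeyProviderLike provider, String keyName) {
    warmUpPool.submit(() -> {
      try {
        provider.warmUpEncryptedKeys(keyName);
      } catch (IOException e) {
        // Warm-up is best effort; EDEKs can be fetched on demand later.
        System.err.println("EDEK warm-up failed for " + keyName + ": " + e);
      }
    });
  }
}
{code}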



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2018-03-01 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383049#comment-16383049
 ] 

SammiChen commented on HDFS-11885:
--

Is it still on target for 2.9.1?  If not, can we push this out from 2.9.1 to the 
next release?

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-01 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383039#comment-16383039
 ] 

maobaolong commented on HDFS-13204:
---

[~elgoiri] Do you think the following makes sense?

- Routers
- Active: federationdfshealth-router-alive
- Safe mode: federationdfshealth-router-safemode
- Unavailable: federationdfshealth-router-unavailable

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In the federation health webpage, the safe mode icons of Subclusters and Routers 
> are inconsistent.
> The safe mode icon of Subclusters may mislead users into thinking the name 
> service is under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't do write-related 
> operations. So I think the safe mode icon in Subclusters should be modified, 
> which may be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Description: 
if (!isRef && isDir && dir.isWithSnapshot()) {
 //if the path is a non-snapshot path, update the latest snapshot.
 if (!isSnapshot && shouldUpdateLatestId(
 dir.getDirectoryWithSnapshotFeature().getLastSnapshotId(),
 snapshotId)) {
 snapshotId = dir.getDirectoryWithSnapshotFeature().getLastSnapshotId();
 }
} else if (isRef && isDir && !lastComp) {

// we can't resolve the INodeReference in path

 }

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: mster
>Priority: Major
>  Labels: INodesInPath
>
> if (!isRef && isDir && dir.isWithSnapshot()) {
>  //if the path is a non-snapshot path, update the latest snapshot.
>  if (!isSnapshot && shouldUpdateLatestId(
>  dir.getDirectoryWithSnapshotFeature().getLastSnapshotId(),
>  snapshotId)) {
>  snapshotId = dir.getDirectoryWithSnapshotFeature().getLastSnapshotId();
>  }
> } else if (isRef && isDir && !lastComp) {
> // we can't resolve the INodeReference in path
>  }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383034#comment-16383034
 ] 

genericqa commented on HDFS-12257:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12257 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889633/HDFS-12257.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23259/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>Priority: Major
> Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, 
> HDFS-12257.003.patch
>
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we 
> should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2018-03-01 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383028#comment-16383028
 ] 

SammiChen edited comment on HDFS-12257 at 3/2/18 2:29 AM:
--

Hi [~HuafengWang], is this targeted for 2.9.1?  If not, can we push this out 
to the next 2.9.2 release? 


was (Author: sammi):
Hi [~HuafengWang], is this targeted for 2.9.1?  If not, can we push this out 
to the next 2.9 release? 

> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>Priority: Major
> Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, 
> HDFS-12257.003.patch
>
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we 
> should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2018-03-01 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383028#comment-16383028
 ] 

SammiChen commented on HDFS-12257:
--

Hi [~HuafengWang], is this targeted for 2.9.1?  If not, can we push this out 
to the next 2.9 release? 

> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>Priority: Major
> Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, 
> HDFS-12257.003.patch
>
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we 
> should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Labels: INodesInPath  (was: )

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: mster
>Priority: Major
>  Labels: INodesInPath
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Affects Version/s: 3.0.0

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: mster
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mster updated HDFS-13216:
-
Component/s: hdfs

> HDFS
> 
>
> Key: HDFS-13216
> URL: https://issues.apache.org/jira/browse/HDFS-13216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: mster
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13216) HDFS

2018-03-01 Thread mster (JIRA)
mster created HDFS-13216:


 Summary: HDFS
 Key: HDFS-13216
 URL: https://issues.apache.org/jira/browse/HDFS-13216
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: mster






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383010#comment-16383010
 ] 

genericqa commented on HDFS-1686:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 25 unchanged - 1 fixed = 25 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-1686 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912680/HDFS-1686.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 126430fb21fe 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96e8f26 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-03-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Status: Patch Available  (was: Open)

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch
>
>
> This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. 
> I remove the cmd and related test to have a clean merge. [~ajakumar], please 
> fix the cmd and bring back the related test. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread fang zhenyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382941#comment-16382941
 ] 

fang zhenyi commented on HDFS-13210:


Fixed the checkstyle issues in HDFS-13210.003.patch.

> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>     Iterable<Block> blocksToInject, String bpid) throws IOException {
>   if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>     throw new IndexOutOfBoundsException();
>   }
>   final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
>   final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
>   if (!(dataSet instanceof SimulatedFSDataset)) {
>     throw new IOException("injectBlocks is valid only for SimilatedFSDataset");
>   }
>   ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HDFS-13210:
---
Status: In Progress  (was: Patch Available)

> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>     Iterable<Block> blocksToInject, String bpid) throws IOException {
>   if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>     throw new IndexOutOfBoundsException();
>   }
>   final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
>   final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
>   if (!(dataSet instanceof SimulatedFSDataset)) {
>     throw new IOException("injectBlocks is valid only for SimilatedFSDataset");
>   }
>   ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HDFS-13210:
---
Attachment: HDFS-13210.003.patch

> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>     Iterable<Block> blocksToInject, String bpid) throws IOException {
>   if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>     throw new IndexOutOfBoundsException();
>   }
>   final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
>   final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
>   if (!(dataSet instanceof SimulatedFSDataset)) {
>     throw new IOException("injectBlocks is valid only for SimilatedFSDataset");
>   }
>   ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13210) Fix the typo in MiniDFSCluster class

2018-03-01 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HDFS-13210:
---
Status: Patch Available  (was: In Progress)

> Fix the typo in MiniDFSCluster class 
> -
>
> Key: HDFS-13210
> URL: https://issues.apache.org/jira/browse/HDFS-13210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: fang zhenyi
>Priority: Trivial
> Attachments: HDFS-13210.001.patch, HDFS-13210.002.patch, 
> HDFS-13210.003.patch
>
>
> There is a typo {{SimilatedFSDataset}} in {{MiniDFSCluster#injectBlocks}}.
>  In lines 2748 and 2769:
> {code:java}
> public void injectBlocks(int dataNodeIndex,
>     Iterable<Block> blocksToInject, String bpid) throws IOException {
>   if (dataNodeIndex < 0 || dataNodeIndex > dataNodes.size()) {
>     throw new IndexOutOfBoundsException();
>   }
>   final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
>   final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
>   if (!(dataSet instanceof SimulatedFSDataset)) {
>     throw new IOException("injectBlocks is valid only for SimilatedFSDataset");
>   }
>   ...
> }
> {code}
> {{SimilatedFSDataset}} should be {{SimulatedFSDataset}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382938#comment-16382938
 ] 

Dennis Huo commented on HDFS-13056:
---

[~ajayydv] Thanks for helping test and reviewing! Applied most of your comments 
in patch .004.

 

As for renaming the parameter to blockChecksumType: in an earlier patch the 
functions did indeed take a BlockChecksumType directly, but it became necessary 
to add the stripeLength field for stripe reconstruction, so I created the 
BlockChecksumOptions struct to hold both a BlockChecksumType and a 
stripeLength, and renamed the parameters to blockChecksumOptions accordingly. 
It seems to me some places might be handling a blockChecksumType variable 
separately from blockChecksumOptions, so renaming to blockChecksumType could 
cause confusion. I could be missing something, though; I'd be happy to hear 
other reasons for renaming to blockChecksumType, or suggestions for entirely 
different parameter names.
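
For readers following along, here is a minimal sketch of the shape of that 
options holder, pieced together from this discussion alone; the names and 
fields are assumptions, not code copied from the .004 patch:

{code:java}
// Sketch only: an options struct pairing the checksum type with the
// stripe length needed for stripe reconstruction, as described above.
enum BlockChecksumType { MD5CRC, COMPOSITE_CRC }

class BlockChecksumOptions {
  private final BlockChecksumType blockChecksumType;
  // Only meaningful for striped (EC) layouts, where per-block CRCs must
  // be composed per stripe of this many bytes.
  private final long stripeLength;

  BlockChecksumOptions(BlockChecksumType blockChecksumType, long stripeLength) {
    this.blockChecksumType = blockChecksumType;
    this.stripeLength = stripeLength;
  }

  BlockChecksumType getBlockChecksumType() { return blockChecksumType; }
  long getStripeLength() { return stripeLength; }
}
{code}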

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
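
To make the composition property concrete, the following self-contained demo 
(a straight port of zlib's crc32_combine, not code from the patch) computes 
CRC32(A || B) from CRC32(A), CRC32(B), and the length of B alone, which is 
exactly what makes a composite CRC independent of block and chunk boundaries:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class CrcComposeDemo {

  // Multiply a 32x32 GF(2) matrix (one long per column) by a 32-bit vector.
  private static long gf2MatrixTimes(long[] mat, long vec) {
    long sum = 0;
    for (int i = 0; vec != 0; vec >>>= 1, i++) {
      if ((vec & 1) != 0) {
        sum ^= mat[i];
      }
    }
    return sum;
  }

  // square = mat * mat in GF(2).
  private static void gf2MatrixSquare(long[] square, long[] mat) {
    for (int n = 0; n < 32; n++) {
      square[n] = gf2MatrixTimes(mat, mat[n]);
    }
  }

  /** Combine crc1 (over A) and crc2 (over B) into the CRC32 of A || B. */
  static long crc32Combine(long crc1, long crc2, long len2) {
    if (len2 <= 0) {
      return crc1;
    }
    long[] even = new long[32];
    long[] odd = new long[32];

    // Operator for one zero bit: the reflected CRC-32 polynomial shift.
    odd[0] = 0xedb88320L;
    long row = 1;
    for (int n = 1; n < 32; n++) {
      odd[n] = row;
      row <<= 1;
    }
    gf2MatrixSquare(even, odd);  // operator for two zero bits
    gf2MatrixSquare(odd, even);  // operator for four zero bits

    // Append len2 zero bytes to A's CRC by repeated matrix squaring.
    do {
      gf2MatrixSquare(even, odd);
      if ((len2 & 1) != 0) {
        crc1 = gf2MatrixTimes(even, crc1);
      }
      len2 >>= 1;
      if (len2 == 0) {
        break;
      }
      gf2MatrixSquare(odd, even);
      if ((len2 & 1) != 0) {
        crc1 = gf2MatrixTimes(odd, crc1);
      }
      len2 >>= 1;
    } while (len2 != 0);

    return crc1 ^ crc2;
  }

  private static long crcOf(byte[] data) {
    CRC32 c = new CRC32();
    c.update(data);
    return c.getValue();
  }

  public static void main(String[] args) {
    byte[] a = "first block ".getBytes(StandardCharsets.UTF_8);
    byte[] b = "second block".getBytes(StandardCharsets.UTF_8);
    byte[] ab = new byte[a.length + b.length];
    System.arraycopy(a, 0, ab, 0, a.length);
    System.arraycopy(b, 0, ab, a.length, b.length);

    // Both lines print the same value: the composed CRC matches the CRC
    // over the concatenation, regardless of where the block boundary is.
    System.out.printf("composed=%08x%n", crc32Combine(crcOf(a), crcOf(b), b.length));
    System.out.printf("direct  =%08x%n", crcOf(ab));
  }
}
{code}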



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Status: Patch Available  (was: Open)

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Status: Open  (was: Patch Available)

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-01 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056.004.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, 
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11394) Add method for getting erasure coding policy through WebHDFS

2018-03-01 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382905#comment-16382905
 ] 

Hanisha Koneru commented on HDFS-11394:
---

Hi [~lewuathe], are you planning to continue working on this Jira? If not, I 
would like to take it up. Please let me know.

> Add method for getting erasure coding policy through WebHDFS 
> -
>
> Key: HDFS-11394
> URL: https://issues.apache.org/jira/browse/HDFS-11394
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, namenode
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Major
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11394.01.patch, HDFS-11394.02.patch, 
> HDFS-11394.03.patch, HDFS-11394.04.patch
>
>
> We can expose the erasure coding policy of an erasure-coded directory through 
> a WebHDFS method, as is already done for the storage policy. This information 
> can be used by the NameNode Web UI to show the details of erasure-coded 
> directories.
> see: HDFS-8196



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382888#comment-16382888
 ] 

Chris Douglas commented on HDFS-13215:
--

Breaking server-side code into multiple modules is less useful. IIRC, RBF is 
mostly in its own package, and since we upgraded surefire it's easier to run 
subsets of tests using its regex syntax. But sure, it's mostly stand-alone, and 
might as well live in its own package. If tools like the rebalancer take a 
dependency on this package to customize their behavior in federated 
environments, those may have to move into a separate package to avoid the 
circular dependency.

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-03-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Attachment: HDFS-13197-HDFS-7240.000.patch

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch
>
>
> This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. 
> I remove the cmd and related test to have a clean merge. [~ajakumar], please 
> fix the cmd and bring back the related test. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-1686:
--
Hadoop Flags: Reviewed

+1 the 02 patch looks good.  Will wait for the Jenkins report.

> Federation: Add more Balancer tests with federation setting
> ---
>
> Key: HDFS-1686
> URL: https://issues.apache.org/jira/browse/HDFS-1686
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: 4358946.patch, HDFS-1686.00.patch, HDFS-1686.01.patch, 
> HDFS-1686.02.patch, h1686_20110303.patch
>
>
> A test with 3 Namenodes and 4 Datanodes in startup, and then adding 2 new 
> Datanodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-03-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Fix Version/s: HDFS-7240

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
>
> This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. 
> I remove the cmd and related test to have a clean merge. [~ajakumar], please 
> fix the cmd and bring back the related test. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13171) Handle Deletion of nodes in SnapshotSkipList

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382768#comment-16382768
 ] 

Tsz Wo Nicholas Sze edited comment on HDFS-13171 at 3/1/18 10:44 PM:
-

- testRemove2() incorrectly calls the static testRemove (no "2").  After fixing 
it, testRemove2 fails
{code}
java.lang.AssertionError: i = 0
computed = [c1]
expected = []

at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.verifyChildrenList(TestDirectoryDiffList.java:87)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.testRemove2(TestDirectoryDiffList.java:246)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.testRemove2(TestDirectoryDiffList.java:234)
...
Caused by: java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
...
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.assertList(TestDirectoryDiffList.java:72)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.verifyChildrenList(TestDirectoryDiffList.java:85)
... 26 more
{code}
- Let's rename
-* testRemove -> testRemoveFromTail
-* testRemove2 -> testRemoveFromHead
- Let's add a trim method to trim the head.
{code}
//SkipListNode
void trim() {
  // Trim empty top levels: starting from the highest level, remove levels
  // whose skip pointer is null so the node does not keep dead levels.
  for (int level = level(); level > 0 && getSkipNode(level) == null; level--) {
    skipDiffList.remove(level);
  }
}
{code}




was (Author: szetszwo):
- As mentioned previously, the remove code never updates the head. There are 
some bugs.
-* BTW, testRemove2() incorrectly calls the static testRemove (no "2").  After 
fixing it, testRemove2 fails
{code}
java.lang.AssertionError: i = 0
computed = [c1]
expected = []

at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.verifyChildrenList(TestDirectoryDiffList.java:87)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.testRemove2(TestDirectoryDiffList.java:246)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.testRemove2(TestDirectoryDiffList.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.assertList(TestDirectoryDiffList.java:72)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.verifyChildrenList(TestDirectoryDiffList.java:85)
... 26 more
{code}
- Let's rename
-* testRemove -> testRemoveFromTail
-* 

[jira] [Commented] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382795#comment-16382795
 ] 

Bharat Viswanadham commented on HDFS-1686:
--

[~szetszwo]

Thanks for review.

The replication factor is multiplied because, when balancer parameters are 
passed, a replication factor of two is used.
{code:java}
s = new Suite(cluster, nNameNodes, nDataNodes, params, conf, (short)2);{code}
 

In one of the tests, I want to test both blockPools, so I have updated 
testBalancingBlockpoolsWithBlockPoolPolicy to pass the value 2 for 
nNameNodestoBalance.
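
To make the arithmetic concrete (the numbers below are hypothetical, purely 
for illustration, not taken from the test):

{code:java}
// With replication = 2, every logical byte occupies two raw bytes across
// the datanodes, so targeting 30% utilization of raw capacity means
// scaling by the replication factor first:
long totalCapacity = 1000;
short replication = 2;
long totalUsed = (totalCapacity * replication) * 3 / 10;  // = 600
{code}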

 

> Federation: Add more Balancer tests with federation setting
> ---
>
> Key: HDFS-1686
> URL: https://issues.apache.org/jira/browse/HDFS-1686
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: 4358946.patch, HDFS-1686.00.patch, HDFS-1686.01.patch, 
> HDFS-1686.02.patch, h1686_20110303.patch
>
>
> A test with 3 Namenodes and 4 Datanodes in startup, and then adding 2 new 
> Datanodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-1686:
-
Attachment: HDFS-1686.02.patch

> Federation: Add more Balancer tests with federation setting
> ---
>
> Key: HDFS-1686
> URL: https://issues.apache.org/jira/browse/HDFS-1686
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: 4358946.patch, HDFS-1686.00.patch, HDFS-1686.01.patch, 
> HDFS-1686.02.patch, h1686_20110303.patch
>
>
> A test with 3 Namenodes and 4 Datanodes in startup, and then adding 2 new 
> Datanodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13211) Fix a bug in DirectoryDiffList.getMinListForRange

2018-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382790#comment-16382790
 ] 

Hudson commented on HDFS-13211:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13752 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13752/])
HDFS-13211. Fix a bug in DirectoryDiffList.getMinListForRange.  (szetszwo: rev 
96e8f260ab90cc7b5a5aa2a59c182ef20a028238)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryDiffList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestDirectoryDiffList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/ReadOnlyList.java


> Fix a bug in DirectoryDiffList.getMinListForRange
> -
>
> Key: HDFS-13211
> URL: https://issues.apache.org/jira/browse/HDFS-13211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13211.001.patch, HDFS-13211.002.patch
>
>
> HDFS-13102 implements the DiffList interface for storing Directory Diffs 
> using SkipList.
> This Jira proposes to refactor the unit tests for HDFS-13102.
> We also have found a bug in DirectoryDiffList.getMinListForRange by the new 
> tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13171) Handle Deletion of nodes in SnapshotSkipList

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382768#comment-16382768
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13171:


- As mentioned previously, the remove code never updates the head. There are 
some bugs.
-* BTW, testRemove2() incorrectly calls the static testRemove (no "2").  After 
fixing it, testRemove2 fails
{code}
java.lang.AssertionError: i = 0
computed = [c1]
expected = []

at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.verifyChildrenList(TestDirectoryDiffList.java:87)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.testRemove2(TestDirectoryDiffList.java:246)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.testRemove2(TestDirectoryDiffList.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.assertList(TestDirectoryDiffList.java:72)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDirectoryDiffList.verifyChildrenList(TestDirectoryDiffList.java:85)
... 26 more
{code}
- Let's rename
-* testRemove -> testRemoveFromTail
-* testRemove2 -> testRemoveFromHead

> Handle Deletion of nodes in SnapshotSkipList
> 
>
> Key: HDFS-13171
> URL: https://issues.apache.org/jira/browse/HDFS-13171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13171.000.patch, HDFS-13171.001.patch
>
>
> This Jira will handle deletion of skipListNodes from DirectoryDiffList. If a 
> node has multiple levels, the list needs to be balanced; if the node is 
> single-level, no balancing of the list is required.
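
To illustrate the multi-level case, here is a generic, self-contained 
skip-list sketch (a textbook structure, not the DirectoryDiffList code): 
removing a node that spans several levels means re-linking its predecessor at 
every level it occupies, while a single-level node only touches level 0.

{code:java}
import java.util.Random;

class SkipListSketch {
  private static final int MAX_LEVEL = 8;

  private static final class Node {
    final int key;
    final Node[] forward;  // forward[i] = successor at level i
    Node(int key, int levels) {
      this.key = key;
      this.forward = new Node[levels];
    }
  }

  private final Node head = new Node(Integer.MIN_VALUE, MAX_LEVEL);
  private final Random rnd = new Random(0);

  void insert(int key) {
    Node[] update = predecessors(key);
    int levels = 1;
    while (levels < MAX_LEVEL && rnd.nextBoolean()) {
      levels++;  // coin flips decide how many levels the node spans
    }
    Node n = new Node(key, levels);
    for (int i = 0; i < levels; i++) {
      n.forward[i] = update[i].forward[i];
      update[i].forward[i] = n;
    }
  }

  void remove(int key) {
    Node[] update = predecessors(key);
    Node target = update[0].forward[0];
    if (target == null || target.key != key) {
      return;
    }
    // Re-link every level the node participates in; for a single-level
    // node this loop runs once and nothing else needs adjusting.
    for (int i = 0; i < target.forward.length; i++) {
      update[i].forward[i] = target.forward[i];
    }
  }

  // Last node with key strictly less than the given key, at every level.
  private Node[] predecessors(int key) {
    Node[] update = new Node[MAX_LEVEL];
    Node x = head;
    for (int i = MAX_LEVEL - 1; i >= 0; i--) {
      while (x.forward[i] != null && x.forward[i].key < key) {
        x = x.forward[i];
      }
      update[i] = x;
    }
    return update;
  }
}
{code}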



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13211) Fix a bug in DirectoryDiffList.getMinListForRange

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13211:
---
Description: 
HDFS-13102 implements the DiffList interface for storing Directory Diffs using 
SkipList.

This Jira proposes to refactor the unit tests for HDFS-13102.

We also have found a bug in DirectoryDiffList.getMinListForRange by the new 
tests.

  was:
HDFS-13102 implements the DiffList interface for storing Directory Diffs using 
SkipList.

This Jira proposes to refactor the unit tests for HDFS-13102.

We also have found a bug in the new tests.


> Fix a bug in DirectoryDiffList.getMinListForRange
> -
>
> Key: HDFS-13211
> URL: https://issues.apache.org/jira/browse/HDFS-13211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13211.001.patch, HDFS-13211.002.patch
>
>
> HDFS-13102 implements the DiffList interface for storing Directory Diffs 
> using SkipList.
> This Jira proposes to refactor the unit tests for HDFS-13102.
> We also have found a bug in DirectoryDiffList.getMinListForRange by the new 
> tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13211) Fix a bug in DirectoryDiffList.getMinListForRange

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13211:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Shashi!

> Fix a bug in DirectoryDiffList.getMinListForRange
> -
>
> Key: HDFS-13211
> URL: https://issues.apache.org/jira/browse/HDFS-13211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13211.001.patch, HDFS-13211.002.patch
>
>
> HDFS-13102 implements the DiffList interface for storing Directory Diffs 
> using SkipList.
> This Jira proposes to refactor the unit tests for HDFS-13102.
> We also have found a bug in the new tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13211) Fix a bug in DirectoryDiffList.getMinListForRange

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13211:
---
Summary: Fix a bug in DirectoryDiffList.getMinListForRange  (was: 
Refactor Unit Tests for SnapshotSKipList)
Description: 
HDFS-13102 implements the DiffList interface for storing Directory Diffs using 
SkipList.

This Jira proposes to refactor the unit tests for HDFS-13102.

We also have found a bug in the new tests.

  was:
HDFS-13102 implements the DiffList interface for storing Directory Diffs using 
SkipList.

This Jira proposes to refactor the unit tests for HDFS-13102.

 Issue Type: Bug  (was: Improvement)

> Fix a bug in DirectoryDiffList.getMinListForRange
> -
>
> Key: HDFS-13211
> URL: https://issues.apache.org/jira/browse/HDFS-13211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13211.001.patch, HDFS-13211.002.patch
>
>
> HDFS-13102 implements the DiffList interface for storing Directory Diffs 
> using SkipList.
> This Jira proposes to refactor the unit tests for HDFS-13102.
> We also have found a bug in the new tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13211) Fix a bug in DirectoryDiffList.getMinListForRange

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13211:
---
Hadoop Flags: Reviewed

+1 the 002 patch looks good.

> Fix a bug in DirectoryDiffList.getMinListForRange
> -
>
> Key: HDFS-13211
> URL: https://issues.apache.org/jira/browse/HDFS-13211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13211.001.patch, HDFS-13211.002.patch
>
>
> HDFS-13102 implements the DiffList interface for storing Directory Diffs 
> using SkipList.
> This Jira proposes to refactor the unit tests for HDFS-13102.
> We also have found a bug in the new tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382663#comment-16382663
 ] 

Tsz Wo Nicholas Sze commented on HDFS-1686:
---

- Question: why multiply by the replication factor when computing totalUsed?
{code}
-  final long totalUsed = totalCapacity*3/10;
+  final long totalUsed = (totalCapacity * s.replication)*3/10;
{code}
- The contents of testBalancingBlockpoolsWithBlockPoolPolicy and 
test1OutOf2BlockpoolsWithBlockPoolPolicy are the same.  Did you forget to 
modify one of them?

> Federation: Add more Balancer tests with federation setting
> ---
>
> Key: HDFS-1686
> URL: https://issues.apache.org/jira/browse/HDFS-1686
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: 4358946.patch, HDFS-1686.00.patch, HDFS-1686.01.patch, 
> h1686_20110303.patch
>
>
> A test with 3 Namenodes and 4 Datanodes in startup, and then adding 2 new 
> Datanodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13206) IllegalStateException: Unable to finalize edits file

2018-03-01 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-13206.
---
Resolution: Invalid

> IllegalStateException: Unable to finalize edits file
> 
>
> Key: HDFS-13206
> URL: https://issues.apache.org/jira/browse/HDFS-13206
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Priority: Minor
> Attachments: testFavoredNodeTableImport-output.txt
>
>
> I noticed the following in hbase test output running against hadoop3:
> {code}
> 2018-02-28 18:40:18,491 ERROR [Time-limited test] namenode.JournalSet(402): 
> Error: finalize log segment 1, 658 failed for (journal 
> JournalAndStream(mgr=FileJournalManager(root=/mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1),
>  stream=null))
> java.lang.IllegalStateException: Unable to finalize edits file 
> /mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1/current/edits_inprogress_001
>   at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:153)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:224)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1427)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:398)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:110)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1320)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1909)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:1013)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.stopAndJoinNameNode(MiniDFSCluster.java:2047)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1987)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1958)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1951)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:767)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1109)
>   at 
> org.apache.hadoop.hbase.master.balancer.TestFavoredNodeTableImport.stopCluster(TestFavoredNodeTableImport.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382554#comment-16382554
 ] 

Íñigo Goiri commented on HDFS-13212:


HDFS-13208 reported the same issue (now closed as a duplicate of this one).
Please check the discussion there for details on how to reproduce this.

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache entry. The old location 
> cache will never be invalidated until this mount point changes again.
> We need to invalidate the location cache when adding mount table entries.
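
A minimal sketch of the kind of invalidation being described; the field and 
method names below are illustrative assumptions, not the actual 
MountTableResolver internals or the attached patch:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class LocationCacheSketch {
  // Cached resolution of a client path to its target namespace location.
  private final ConcurrentMap<String, String> locationCache =
      new ConcurrentHashMap<>();

  // Called whenever a mount entry is added (or changed): drop the cached
  // resolution for the mount path and everything underneath it, so the
  // next lookup resolves against the refreshed mount table instead of a
  // stale default-namespace mapping.
  void invalidate(String srcPath) {
    locationCache.keySet().removeIf(
        p -> p.equals(srcPath) || p.startsWith(srcPath + "/"));
  }
}
{code}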



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382533#comment-16382533
 ] 

Tsz Wo Nicholas Sze commented on HDFS-1686:
---

Sure, will do.

> Federation: Add more Balancer tests with federation setting
> ---
>
> Key: HDFS-1686
> URL: https://issues.apache.org/jira/browse/HDFS-1686
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: 4358946.patch, HDFS-1686.00.patch, HDFS-1686.01.patch, 
> h1686_20110303.patch
>
>
> A test with 3 Namenodes and 4 Datanodes in startup, and then adding 2 new 
> Datanodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13208) RBF: Mount path not available after ADD-REMOVE-ADD

2018-03-01 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382519#comment-16382519
 ] 

Wei Yan edited comment on HDFS-13208 at 3/1/18 7:40 PM:


Ok, finally I got through the pipeline. In short, it can be resolved by 
HDFS-13212.

In step 2, when we "rm" a mount point and then issue an "ls" cmd, it leaves a 
record in the local cache. After step 3, although the mount point has changed, 
the cache still refers to the default NS, so a follow-up "ls" will still point 
to the wrong location (the one cached after step 2).

This cannot be reproduced in [~linyiqun]'s testcase, as it doesn't involve a 
local cache operation.

Closing this ticket as a duplicate.


was (Author: ywskycn):
Ok, finally I got through the pipeline. In short, it can be resolved by 
HDFS-13212.

In step 2, when we "rm" a mount point and then issue an "ls" cmd, it leaves a 
record in the local cache. After step 3, although the mount point has changed, 
the cache still refers to the default NS, so a follow-up "ls" will still point 
to the wrong location (the one cached after step 2).

This cannot be reproduced in [~linyiqun]'s testcase, as it doesn't involve a 
local cache operation (which requires running FileSystem.listStatus()).

Closing this ticket as a duplicate.

> RBF: Mount path not available after ADD-REMOVE-ADD
> --
>
> Key: HDFS-13208
> URL: https://issues.apache.org/jira/browse/HDFS-13208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Critical
>
> To reproduce this issue, run the following commands at Router 1:
> {code:java}
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1
> $ hdfs dfsrouteradmin -rm /test1
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1{code}
> "hdfs dfs -ls hdfs://Router1:8020/test1" works well after step 1. After step 
> 3 when we add /test1 back, Router 1 still returns "No such file or 
> directory". 
> But after step 3, when we run cmd "hdfs dfs -ls hdfs://Router2:8020/test1" 
> talking to another Router, it works well.
> From Router logs, I can see StateStoreZookeeperImpl and MountTableResolver 
> are updated correctly and in time. I haven't found the root cause yet; still 
> looking into it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13208) RBF: Mount path not available after ADD-REMOVE-ADD

2018-03-01 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan resolved HDFS-13208.

Resolution: Duplicate

> RBF: Mount path not available after ADD-REMOVE-ADD
> --
>
> Key: HDFS-13208
> URL: https://issues.apache.org/jira/browse/HDFS-13208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Critical
>
> To reproduce this issue, run the following commands at Router 1:
> {code:java}
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1
> $ hdfs dfsrouteradmin -rm /test1
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1{code}
> "hdfs dfs -ls hdfs://Router1:8020/test1" works well after step 1. After step 
> 3 when we add /test1 back, Router 1 still returns "No such file or 
> directory". 
> But after step 3, when we run cmd "hdfs dfs -ls hdfs://Router2:8020/test1" 
> talking to another Router, it works well.
> From Router logs, I can see StateStoreZookeeperImpl and MountTableResolver 
> are updated correctly and in time. I haven't found the root cause yet; still 
> looking into it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13208) RBF: Mount path not available after ADD-REMOVE-ADD

2018-03-01 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382519#comment-16382519
 ] 

Wei Yan commented on HDFS-13208:


Ok, finally I got through the pipeline. In short, it can be resolved by 
HDFS-13212.

In step 2, when we "rm" a mount point and then issue an "ls" cmd, it leaves a 
record in the local cache. After step 3, although the mount point has changed, 
the cache still refers to the default NS, so a follow-up "ls" will still point 
to the wrong location (the one cached after step 2).

This cannot be reproduced in [~linyiqun]'s testcase, as it doesn't involve a 
local cache operation (which requires running FileSystem.listStatus()).

Closing this ticket as a duplicate.
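
To illustrate the failure mode, here is a minimal, self-contained sketch of 
the ADD-REMOVE-ADD sequence against a toy path cache (class and method names 
are illustrative, not the actual MountTableResolver API): remove invalidates 
the cached entry, but add does not, so the default-namespace resolution cached 
in step 2 survives step 3.
{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of the stale-cache bug; not the real MountTableResolver.
public class StaleCacheDemo {
  static final Map<String, String> mountTable = new HashMap<>();
  static final Map<String, String> locationCache = new HashMap<>();

  // Resolve a path, caching the result; unknown paths fall back to the default NS.
  static String resolve(String path) {
    return locationCache.computeIfAbsent(path,
        p -> mountTable.getOrDefault(p, "default-ns" + p));
  }

  static void addEntry(String path, String target) {
    mountTable.put(path, target);
    // Bug: no cache invalidation here, unlike removeEntry below.
  }

  static void removeEntry(String path) {
    mountTable.remove(path);
    locationCache.remove(path);
  }

  public static void main(String[] args) {
    addEntry("/test1", "ns1/test1");
    System.out.println(resolve("/test1")); // ns1/test1
    removeEntry("/test1");                 // step 2: rm
    System.out.println(resolve("/test1")); // default-ns/test1, now cached
    addEntry("/test1", "ns1/test1");       // step 3: add it back
    System.out.println(resolve("/test1")); // still default-ns/test1: the bug
  }
}
{code}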

> RBF: Mount path not available after ADD-REMOVE-ADD
> --
>
> Key: HDFS-13208
> URL: https://issues.apache.org/jira/browse/HDFS-13208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Critical
>
> To reproduce this issue, run the following commands at Router 1:
> {code:java}
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1
> $ hdfs dfsrouteradmin -rm /test1
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1{code}
> "hdfs dfs -ls hdfs://Router1:8020/test1" works well after step 1. After step 
> 3 when we add /test1 back, Router 1 still returns "No such file or 
> directory". 
> But after step 3, when we run cmd "hdfs dfs -ls hdfs://Router2:8020/test1" 
> talking to another Router, it works well.
> From Router logs, I can see StateStoreZookeeperImpl and MountTableResolver 
> are updated correctly and in time. Haven't found the root cause yet; still 
> looking into it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1686) Federation: Add more Balancer tests with federation setting

2018-03-01 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382512#comment-16382512
 ] 

Bharat Viswanadham commented on HDFS-1686:
--

Hi [~szetszwo]

Can you help review this JIRA?

 

> Federation: Add more Balancer tests with federation setting
> ---
>
> Key: HDFS-1686
> URL: https://issues.apache.org/jira/browse/HDFS-1686
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer  mover, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: 4358946.patch, HDFS-1686.00.patch, HDFS-1686.01.patch, 
> h1686_20110303.patch
>
>
> A test that starts 3 Namenodes and 4 Datanodes, and then adds 2 new 
> Datanodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5226) Trash::moveToTrash doesn't work across multiple namespace

2018-03-01 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382499#comment-16382499
 ] 

Bharat Viswanadham commented on HDFS-5226:
--

[~jrottinghuis]

Thank you for reporting this.

This issue was opened a very long time ago; do you remember any details about 
it?

For example, are you calling moveToTrash with a fully qualified Path, like 
hdfs://mycluster:8020/<>, or with a simple path like new Path("/<>")?
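
For reference, a minimal sketch of the two call patterns in question (paths 
and the cluster name are placeholders, not values from the original report):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashCallPatterns {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Pattern 1: fully qualified path, filesystem encoded in the URI.
    Path qualified = new Path("hdfs://mycluster:8020/user/alice/data");

    // Pattern 2: simple path, resolved against the default filesystem.
    Path simple = new Path("/user/alice/data");

    // The static helper resolves the path's own filesystem first, so it
    // moves the file to the trash of the namespace that owns the path.
    Trash.moveToAppropriateTrash(fs, qualified, conf);

    // A Trash instance is bound to a single filesystem and only consults
    // that namespace, which is where the cross-namespace problem shows up.
    Trash trash = new Trash(fs, conf);
    trash.moveToTrash(simple);
  }
}
{code}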

 

 

> Trash::moveToTrash doesn't work across multiple namespace
> -
>
> Key: HDFS-5226
> URL: https://issues.apache.org/jira/browse/HDFS-5226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.0.5-alpha
>Reporter: Joep Rottinghuis
>Priority: Major
>
> Trash has introduced a new static method, moveToAppropriateTrash, which 
> resolves to the right filesystem. To be API compatible, we need to check 
> whether Trash::moveToTrash can do what moveToAppropriateTrash does, so that 
> downstream users need not change code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12512) RBF: Add WebHDFS

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16358972#comment-16358972
 ] 

Íñigo Goiri edited comment on HDFS-12512 at 3/1/18 6:16 PM:


It doesn't look like it ran the tests either. We may have to wait for the bug 
bash.


was (Author: elgoiri):
It doesn't look like it run the tests either. We may ahve to wait for the bug 
bash.

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch, HDFS-12512.007.patch, 
> HDFS-12512.008.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382416#comment-16382416
 ] 

Íñigo Goiri commented on HDFS-13215:


YARN itself has done a pretty good job at partitioning the project and making 
the code base more manageable.
Recently, in HDFS-12512 we had some issues running tests too; not sure if this 
would help there.
[~chris.douglas], [~aw], do you guys think this is useful at all? Good to have?
I should've done this from the start in HDFS-10467 but never thought about it...
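
For concreteness, a sketch of what the split could look like in the parent POM 
(the module name hadoop-hdfs-rbf is an assumption here, not a decided layout):
{code:xml}
<!-- hadoop-hdfs-project/pom.xml (illustrative only) -->
<modules>
  <module>hadoop-hdfs-client</module>
  <module>hadoop-hdfs</module>
  <!-- hypothetical new module holding the Router-based federation code -->
  <module>hadoop-hdfs-rbf</module>
</modules>
{code}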


> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-01 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382413#comment-16382413
 ] 

Wei Yan commented on HDFS-13215:


+1, that will make unit tests much easier...

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13215) RBF: Move Router to its own module

2018-03-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13215:
---
Summary: RBF: Move Router to its own module  (was: RBF: move Router to its 
own module)

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13215) RBF: move Router to its own module

2018-03-01 Thread JIRA
Íñigo Goiri created HDFS-13215:
--

 Summary: RBF: move Router to its own module
 Key: HDFS-13215
 URL: https://issues.apache.org/jira/browse/HDFS-13215
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri


We are splitting the HDFS client code base and potentially Router-based 
Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382402#comment-16382402
 ] 

Íñigo Goiri commented on HDFS-13214:


I think you can have a nameservice without HA, and in that case this is how it 
gets specified.
That's the way it's defined in the [documentation for HDFS 
federation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html].
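
For example, a non-HA nameservice is addressed without a per-NameNode suffix, 
while an HA (or Router) nameservice adds one; a minimal sketch (hosts and 
ports are placeholders):
{code:xml}
<!-- Non-HA nameservice: no NameNode id in the key. -->
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>host1:8020</value>
</property>
<!-- HA-style nameservice: one key per NameNode (or Router) id. -->
<property>
  <name>dfs.namenode.rpc-address.ns-fed.r1</name>
  <value>host1:8888</value>
</property>
{code}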

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Priority: Major
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code:xml}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382390#comment-16382390
 ] 

Íñigo Goiri commented on HDFS-13195:


Apparently, this part of the code was introduced by HDFS-8572.
[~wheat9], do you mind taking a look to make sure we are not breaking something 
else?
I don't think there are any unit tests covering any of this; is there anything 
simple we can use to check this behavior?

> DataNode conf page cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> Branch-2.7 now supports reconfiguring dfs.datanode.data.dir, but after I 
> reconfigure this key, the conf page still shows the old value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new Configuration instance, so when dfsadmin 
> reconfigures the DataNode's config, the change is not reflected in 
> confForInfoServer; we should use the DataNode's conf instead.
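
A minimal sketch of why the copied configuration misses later reconfiguration, 
assuming the snapshot semantics of Hadoop's Configuration copy constructor 
(the property value paths are illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfCopySnapshot {
  public static void main(String[] args) {
    Configuration datanodeConf = new Configuration(false);
    datanodeConf.set("dfs.datanode.data.dir", "/data/old");

    // The copy constructor snapshots properties at construction time.
    Configuration confForInfoServer = new Configuration(datanodeConf);

    // A later reconfiguration updates only the original instance.
    datanodeConf.set("dfs.datanode.data.dir", "/data/new");

    System.out.println(datanodeConf.get("dfs.datanode.data.dir"));      // /data/new
    System.out.println(confForInfoServer.get("dfs.datanode.data.dir")); // /data/old
  }
}
{code}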



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-01 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382386#comment-16382386
 ] 

Wei Yan commented on HDFS-13214:



{code:xml}
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>host1:8020</value>
</property>
{code}
The field here should be "dfs.namenode.rpc-address.ns1.nn1", right?

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Priority: Major
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code:xml}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13208) RBF: Mount path not available after ADD-REMOVE-ADD

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382381#comment-16382381
 ] 

Íñigo Goiri commented on HDFS-13208:


bq. Tried HDFS-13212 this morning and the problem resolved. Let me double-check 
the code path and make sure they're the same issue.

Cool, I'm linking the issues for now; once we have the solution we should mark 
one of them as a duplicate.

> RBF: Mount path not available after ADD-REMOVE-ADD
> --
>
> Key: HDFS-13208
> URL: https://issues.apache.org/jira/browse/HDFS-13208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Critical
>
> To reproduce this issue, run the following commands at Router 1:
> {code:java}
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1
> $ hdfs dfsrouteradmin -rm /test1
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1{code}
> "hdfs dfs -ls hdfs://Router1:8020/test1" works well after step 1. After step 
> 3 when we add /test1 back, Router 1 still returns "No such file or 
> directory". 
> But after step 3, when we run cmd "hdfs dfs -ls hdfs://Router2:8020/test1" 
> talking to another Router, it works well.
> From Router logs, I can see StateStoreZookeeperImpl and MountTableResolver 
> are updated correctly and in time. Haven't found the root cause yet; still 
> looking into it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382378#comment-16382378
 ] 

Íñigo Goiri commented on HDFS-13214:


In our internal setup, we configure {{dfs.nameservice.id}}.
Does that fix the issue? If so, we could document this in 
{{HDFSRouterFederation.md}}.

[~ywskycn], [~linyiqun], have you experienced similar issues when setting this 
up?
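
For reference, the workaround being suggested amounts to pinning the local 
service identity in hdfs-site.xml so the address lookup is unambiguous; a 
minimal sketch (the value is a placeholder for whichever nameservice the local 
NameNode belongs to):
{code:xml}
<property>
  <name>dfs.nameservice.id</name>
  <value>ns1</value>
</property>
{code}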

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Priority: Major
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code:xml}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13202) Fix the outdated javadocs in HAUtil

2018-03-01 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382376#comment-16382376
 ] 

Chao Sun commented on HDFS-13202:
-

Thanks [~linyiqun] for reviewing and committing the patch!

> Fix the outdated javadocs in HAUtil
> ---
>
> Key: HDFS-13202
> URL: https://issues.apache.org/jira/browse/HDFS-13202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: HDFS-13202.000.patch
>
>
> There are a few outdated javadocs in {{HAUtil}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13208) RBF: Mount path not available after ADD-REMOVE-ADD

2018-03-01 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382375#comment-16382375
 ] 

Wei Yan commented on HDFS-13208:


Tried HDFS-13212 this morning and the problem is resolved. Let me double-check 
the code path and make sure they're the same issue.

> RBF: Mount path not available after ADD-REMOVE-ADD
> --
>
> Key: HDFS-13208
> URL: https://issues.apache.org/jira/browse/HDFS-13208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Critical
>
> To reproduce this issue, run the following commands at Router 1:
> {code:java}
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1
> $ hdfs dfsrouteradmin -rm /test1
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1{code}
> "hdfs dfs -ls hdfs://Router1:8020/test1" works well after step 1. After step 
> 3 when we add /test1 back, Router 1 still returns "No such file or 
> directory". 
> But after step 3, when we run cmd "hdfs dfs -ls hdfs://Router2:8020/test1" 
> talking to another Router, it works well.
> From Router logs, I can see StateStoreZookeeperImpl and MountTableResolver 
> are updated correctly and in time. Haven't found the root cause yet; still 
> looking into it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13214:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-12615

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Priority: Major
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code:xml}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382371#comment-16382371
 ] 

Íñigo Goiri commented on HDFS-13204:


[~raviprak], can you provide some feedback?
My current proposal is to do what [^HDFS-13204.001.patch] does, but use 
dfshealth-node-down-decommissioned for a Router in safe mode.
To summarize, my proposal is:
* Nameservices
** Active: dfshealth-node-alive
** Standby: dfshealth-node-down-decommissioned
** Safe mode: dfshealth-node-decommissioned
** Unavailable: dfshealth-node-down
* Namenodes
** Active: dfshealth-node-alive
** Standby: dfshealth-node-down-decommissioned
** Safe mode: dfshealth-node-decommissioned
** Unavailable: dfshealth-node-down
* Routers
** Active: dfshealth-node-alive
** Safe mode: dfshealth-node-down-decommissioned
** Unavailable: dfshealth-node-down

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In federation health webpage, the safe mode icons of Subclusters and Routers 
> are inconsistent.
> The safe mode icon of Subclusters may lead users to think the name service 
> is under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't perform 
> write-related operations. So I think the safe mode icon in Subclusters 
> should be modified, which would be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13208) RBF: Mount path not available after ADD-REMOVE-ADD

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382364#comment-16382364
 ] 

Íñigo Goiri commented on HDFS-13208:


It might be something related to the cache.
They just reported HDFS-13212 and we had HDFS-12988.
[~ywskycn], can you reproduce it with a unit test similar to what [~linyiqun] 
posted?

> RBF: Mount path not available after ADD-REMOVE-ADD
> --
>
> Key: HDFS-13208
> URL: https://issues.apache.org/jira/browse/HDFS-13208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Critical
>
> To reproduce this issue, run the following commands at Router 1:
> {code:java}
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1
> $ hdfs dfsrouteradmin -rm /test1
> $ hdfs dfsrouteradmin -add /test1 ns1 /ns1/test1{code}
> "hdfs dfs -ls hdfs://Router1:8020/test1" works well after step 1. After step 
> 3 when we add /test1 back, Router 1 still returns "No such file or 
> directory". 
> But after step 3, when we run cmd "hdfs dfs -ls hdfs://Router2:8020/test1" 
> talking to another Router, it works well.
> From Router logs, I can see StateStoreZookeeperImpl and MountTableResolver 
> are updated correctly and in time. Haven't found the root cause yet; still 
> looking into it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382357#comment-16382357
 ] 

Íñigo Goiri commented on HDFS-13212:


[~wuweiwei], thanks for catching this, we had a similar issue in HDFS-12988.
As [~linyiqun] mentioned, you should add a unit test for  
[^HDFS-13212-001.patch].
You can use the one added in HDFS-12988 as a reference or probably modify it to 
cover this case.

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a cached location. The old location cache 
> entry is never invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.
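
In outline, the fix described here is to invalidate cached resolutions under a 
mount point on add as well as on remove; a minimal, self-contained sketch of 
that shape (class and method names are illustrative, not the actual 
MountTableResolver API or the attached patch):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative resolver that invalidates the location cache on both
// addEntry and removeEntry, so a re-added mount point resolves correctly.
public class InvalidatingResolver {
  private final Map<String, String> mountTable = new ConcurrentHashMap<>();
  private final Map<String, String> locationCache = new ConcurrentHashMap<>();

  public void addEntry(String mountPoint, String target) {
    mountTable.put(mountPoint, target);
    invalidate(mountPoint); // the missing step described above
  }

  public void removeEntry(String mountPoint) {
    mountTable.remove(mountPoint);
    invalidate(mountPoint);
  }

  // Drop cached resolutions for the mount point and anything beneath it.
  private void invalidate(String mountPoint) {
    locationCache.keySet().removeIf(p ->
        p.equals(mountPoint) || p.startsWith(mountPoint + "/"));
  }

  public String resolve(String path) {
    return locationCache.computeIfAbsent(path,
        p -> mountTable.getOrDefault(p, "default-ns" + p));
  }
}
{code}
With this shape, the ADD-REMOVE-ADD sequence from HDFS-13208 resolves to the 
re-added target again instead of the stale default-namespace entry.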



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


