[jira] [Commented] (HDFS-11418) HttpFS should support old SSL clients

2017-02-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887459#comment-15887459
 ] 

John Zhuge commented on HDFS-11418:
---

Fixing an issue similar to the one in HADOOP-14131.

> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48, which filters out weak ciphers. Old 
> SSL clients such as curl stop working. The symptom is {{NSS error -12286}} 
> when running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.
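One way to do that (a sketch only; the port, protocol list, and cipher list 
below are illustrative, not the committed patch) is to list the allowed 
ciphers explicitly on the HTTPS connector in the Tomcat server.xml shipped 
with HttpFS:
{code:xml}
<!-- Sketch: the cipher list is illustrative, not a security recommendation. -->
<Connector port="14000" scheme="https" secure="true" SSLEnabled="true"
           sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
           ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA"/>
{code}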






[jira] [Updated] (HDFS-11418) HttpFS should support old SSL clients

2017-02-27 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11418:
--
Status: Open  (was: Patch Available)

> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48, which filters out weak ciphers. Old 
> SSL clients such as curl stop working. The symptom is {{NSS error -12286}} 
> when running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.






[jira] [Commented] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887272#comment-15887272
 ] 

Hadoop QA commented on HDFS-11338:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11338 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855040/HDFS-11338-HDFS-10285.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 74fe29137db2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 

[jira] [Commented] (HDFS-9868) Add ability for DistCp to run between 2 clusters

2017-02-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887273#comment-15887273
 ] 

Yongjun Zhang commented on HDFS-9868:
-

Thanks [~xiaochen]. I just committed HADOOP-14127.

Found a better way to distribute the conf files with DistributedCache: we 
could use
{code}
public void addCacheArchive(URI uri)
{code}
If we create a tar file out of the conf dir and use this API to send the tar 
file to the distributed cache, the same tarred dir hierarchy will be extracted 
and made available in the current working directory.

See 
http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/filecache/DistributedCache.html
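A minimal sketch of that approach (the tar path and the {{#sourceConf}} link 
name are made-up placeholders):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Job job = Job.getInstance(new Configuration(), "distcp");
// Tar of the source cluster conf dir, previously uploaded to HDFS. The URI
// fragment names the directory the archive is unpacked under in each task's
// current working directory.
job.addCacheArchive(new URI("/tmp/sourceClusterConf.tar.gz#sourceConf"));
// Tasks can then read e.g. sourceConf/hdfs-site.xml relative to their CWD.
{code}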

  

> Add ability for DistCp to run between 2 clusters
> 
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.05.patch, HDFS-9868.06.patch, 
> HDFS-9868.07.patch, HDFS-9868.08.patch, HDFS-9868.09.patch, 
> HDFS-9868.10.patch, HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. It can take a long time to copy 
> huge data sets with DistCp. If the source cluster switches its active 
> namenode, the DistCp run will fail. This patch lets DistCp read source 
> cluster files in HA access mode. A source cluster configuration file needs 
> to be specified (via the -sourceClusterConf option).
> The following is an example of the contents of a source cluster 
> configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}






[jira] [Commented] (HDFS-8741) Proper error msg to be printed when invalid operation type is given to WebHDFS operations.

2017-02-27 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887244#comment-15887244
 ] 

Akira Ajisaka commented on HDFS-8741:
-

Hi [~surendrasingh], thank you for providing a patch. Mostly looks good to me.
{code}
throw new IllegalArgumentException(str + " not a valid " + Type.GET
  + " operation.");
{code}
(minor nit) Would you add "is" between str and not?
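For example:
{code}
throw new IllegalArgumentException(str + " is not a valid " + Type.GET
  + " operation.");
{code}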
I'm +1 if that is addressed.

> Proper error msg to be printed when invalid operation type is given to 
> WebHDFS operations.
> --
>
> Key: HDFS-8741
> URL: https://issues.apache.org/jira/browse/HDFS-8741
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-8741.001.patch, HDFS-8741.002.patch
>
>
> When a wrong operation type is given to WebHDFS operations, the following 
> error message is printed --
> For example, CREATE is called with GET instead of PUT --
> HTTP/1.1 400 Bad Request
> ..
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Invalid
>  value for webhdfs parameter \"op\": No enum constant 
> org.apache.hadoop.hdfs.web.resources.PutOpParam.Op.CREATE"}}
> Expected --
> A valid error message to be printed.






[jira] [Updated] (HDFS-11467) Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools

2017-02-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11467:
---
Summary: Support ErasureCodingPolicyManager section in OIV XML/ReverseXML 
and OEV tools  (was: Add ErasureCodingPolicyManager section in OIV 
XML/ReverseXML and OEV tools)

> Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools
> --
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wei-Chiu Chuang
>
> As discussed in HDFS-7859, after the ErasureCodingPolicyManager section is 
> added to the fsimage, we would like to also support exporting this section 
> to XML and back (round-tripping) with the OIV tool.
> Likewise, HDFS-7859 adds new edit log ops, so the OEV tool should also 
> support them.






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-02-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887096#comment-15887096
 ] 

Wei-Chiu Chuang commented on HDFS-7859:
---

Filed HDFS-11467 for OIV/OEV improvement.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently.






[jira] [Created] (HDFS-11467) Add ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools

2017-02-27 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11467:
--

 Summary: Add ErasureCodingPolicyManager section in OIV 
XML/ReverseXML and OEV tools
 Key: HDFS-11467
 URL: https://issues.apache.org/jira/browse/HDFS-11467
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.0.0-alpha3
Reporter: Wei-Chiu Chuang


As discussed in HDFS-7859, after the ErasureCodingPolicyManager section is 
added to the fsimage, we would like to also support exporting this section to 
XML and back (round-tripping) with the OIV tool.

Likewise, HDFS-7859 adds new edit log ops, so the OEV tool should also support 
them.
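A round trip with the OIV tool would then look like this (file names are 
placeholders):
{code}
bash$ hdfs oiv -p XML -i fsimage_0000000000000000042 -o fsimage.xml
bash$ hdfs oiv -p ReverseXML -i fsimage.xml -o fsimage_rebuilt
{code}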






[jira] [Updated] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-02-27 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-11338:

Attachment: HDFS-11338-HDFS-10285.01.patch

Fix TestPersistentStoragePolicySatisfier timeout issue.

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11338-HDFS-10285.00.patch, 
> HDFS-11338-HDFS-10285.01.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop the NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So some tests take longer to finish, and this leads to the timeout failures.
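One possible shape of a fix (an illustration only, not necessarily what the 
attached patch does):
{code}
// Interrupt the satisfier thread before joining, so join() returns as soon as
// the thread exits instead of waiting out the full 3s on every NN stop.
storagePolicySatisfierThread.interrupt();
try {
  storagePolicySatisfierThread.join(3000);
} catch (InterruptedException ie) {
  Thread.currentThread().interrupt();
}
{code}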






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-02-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887076#comment-15887076
 ] 

Wei-Chiu Chuang commented on HDFS-7859:
---

Sure. I'll file OIV/OEV improvement jiras.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently.






[jira] [Commented] (HDFS-11428) Change setErasureCodingPolicy to take a required string EC policy name

2017-02-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887074#comment-15887074
 ] 

Wei-Chiu Chuang commented on HDFS-11428:


Sorry that was not clear. +1 from me.

> Change setErasureCodingPolicy to take a required string EC policy name
> --
>
> Key: HDFS-11428
> URL: https://issues.apache.org/jira/browse/HDFS-11428
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11428.001.patch, HDFS-11428.002.patch, 
> HDFS-11428.003.patch, HDFS-11428.004.patch
>
>
> The current {{setErasureCodingPolicy}} API takes an optional {{ECPolicy}}. 
> This makes calling the API harder for clients, since they need to turn a 
> specified name into a policy, and the set of available EC policies is only 
> available on the NN.
> You can see this awkwardness in the current EC cli set command: it first 
> fetches the list of EC policies, looks for the one specified by the user, 
> then calls set. This means we need to issue two RPCs for every set 
> (inefficient), and we need to do validation on the NN side anyway (extraneous 
> work).
> Since we're phasing out the system default EC policy, it also makes sense to 
> make the policy a required parameter.
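A sketch of the before/after client API shape described above (parameter names 
are assumptions):
{code}
// Before: the caller must resolve and pass an ECPolicy object, which requires
// fetching the policy list from the NN first (an extra RPC).
void setErasureCodingPolicy(Path path, ErasureCodingPolicy ecPolicy);

// After: the caller passes the required policy name; the NN resolves and
// validates it, so a single RPC suffices.
void setErasureCodingPolicy(Path path, String ecPolicyName);
{code}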






[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887062#comment-15887062
 ] 

Hadoop QA commented on HDFS-11382:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
48s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11382 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855021/HDFS-11382.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux 748cd2698e25 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f5b031 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18462/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18462/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887046#comment-15887046
 ] 

Hadoop QA commented on HDFS-11450:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m  
3s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854998/HDFS-11450.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c779787f8fec 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f5b031 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18460/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18460/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18460/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887043#comment-15887043
 ] 

Hadoop QA commented on HDFS-11466:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11466 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855022/HDFS-11466.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 3713ab1f93f0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f5b031 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18463/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18463/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18463/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 
> 5000ms
> 

[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887022#comment-15887022
 ] 

Hudson commented on HDFS-11382:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11314 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11314/])
HDFS-11382. Persist Erasure Coding Policy ID in a new optional field in (wang: 
rev 55c07bbed2f475f7b584a86112ee1b6fe0221e98)
* (edit) hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStripedINodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerWithStripedBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java


> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch, HDFS-11382.05.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID.
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place so that 
> both fields aren't set for the same INodeFile.

[jira] [Commented] (HDFS-8672) Erasure Coding: Add EC-related Metrics to NN (separate striped blocks count from UnderReplicatedBlocks count)

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887018#comment-15887018
 ] 

Andrew Wang commented on HDFS-8672:
---

Hi [~walter.k.su], do you mind if [~manojg] takes this over? Manoj put up a 
proposal on HDFS-10999 for overhauling the metrics, which relates to the goals 
of this JIRA.

> Erasure Coding: Add EC-related Metrics to NN (separate striped blocks count 
> from UnderReplicatedBlocks count)
> -
>
> Key: HDFS-8672
> URL: https://issues.apache.org/jira/browse/HDFS-8672
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
>
> 1. The {{MissingBlocks}} metric was updated in HDFS-8461 so it includes 
> striped blocks.
> 2. The {{CorruptBlocks}} metric was updated in HDFS-8619 so it includes 
> striped blocks.
> 3. {{UnderReplicatedBlocks}} and {{PendingReplicationBlocks}} include 
> striped blocks (HDFS-7912).
> This jira aims to separate the striped blocks count from the 
> {{UnderReplicatedBlocks}} count.
> EC file recovery needs coding computation, which is more expensive than 
> block duplication. It's necessary to separate the striped blocks count from 
> the UnderReplicatedBlocks count, so users can know what's going on.






[jira] [Updated] (HDFS-8140) ECSchema supports for offline EditsVisitor over an OEV XML file

2017-02-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8140:
--
Labels:   (was: hdfs-ec-3.0-nice-to-have)

> ECSchema supports for offline EditsVisitor over an OEV XML file
> ---
>
> Key: HDFS-8140
> URL: https://issues.apache.org/jira/browse/HDFS-8140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Xinwei Qin 
>Assignee: Xinwei Qin 
>
> Make the ECSchema info in the edit log support the offline EditsVisitor over 
> an OEV XML file; this is not implemented in HDFS-7859.






[jira] [Updated] (HDFS-8295) Add MODIFY and REMOVE ECSchema editlog operations

2017-02-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8295:
--
Labels:   (was: hdfs-ec-3.0-nice-to-have)

> Add MODIFY and REMOVE ECSchema editlog operations
> -
>
> Key: HDFS-8295
> URL: https://issues.apache.org/jira/browse/HDFS-8295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xinwei Qin 
>Assignee: Xinwei Qin 
> Attachments: HDFS-8295.001.patch
>
>
> If MODIFY and REMOVE ECSchema operations are supported, then add these 
> editlog operations to persist them. 






[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886992#comment-15886992
 ] 

Manoj Govindassamy commented on HDFS-11382:
---

Thanks for the review and commit help [~andrew.wang], [~ehiggs]. Updated 
Release Notes for this incompatible change. 

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch, HDFS-11382.05.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID.
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place so that 
> both fields aren't set for the same INodeFile.
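A sketch of such an assert (illustrative only; the Guava-style check and the 
generated hazzer names are assumptions):
{code}
// Exactly one of the two optional on-disk fields may be set per INodeFile.
Preconditions.checkArgument(
    !(file.hasReplication() && file.hasErasureCodingPolicyID()),
    "INodeFile must not set both replication and erasureCodingPolicyID");
{code}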






[jira] [Updated] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11382:
--
Release Note: The FSImage on-disk format for INodeFile is changed to 
additionally include a field for Erasure Coded files. This optional field 
'erasureCodingPolicyID', which is of uint32 type, is available for all Erasure 
Coded files and represents the Erasure Coding Policy ID. Previously, the 
'replication' field in the INodeFile disk format was overloaded to represent 
the same Erasure Coding Policy ID.

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch, HDFS-11382.05.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID.
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place so that 
> both fields aren't set for the same INodeFile.






[jira] [Commented] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886988#comment-15886988
 ] 

Junping Du commented on HDFS-11466:
---

Thanks [~andrew.wang] for pinging me on this. Yes, 2.8.0 is pending on the 
HADOOP-13866 decision, which should get resolved soon. About the patch here: 
if it is really important (seems unlikely at first glance), we can get it into 
2.8.0. Otherwise, my suggestion is to leave it to 2.8.1.

> Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 
> 5000ms
> ---
>
> Key: HDFS-11466
> URL: https://issues.apache.org/jira/browse/HDFS-11466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11466.001.patch
>
>
> Per discussion on HDFS-10798, it might make sense to change the default value 
> for long write lock holds to 5000ms like the read threshold, to avoid 
> spamming the log.
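Until the default changes, operators can already get this behavior by setting 
the threshold explicitly in hdfs-site.xml:
{code:xml}
<property>
  <name>dfs.namenode.write-lock-reporting-threshold-ms</name>
  <value>5000</value>
</property>
{code}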






[jira] [Updated] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11382:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks for the contribution Manoj! Do you mind also adding 
a release note for this change, as it's incompatible?

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch, HDFS-11382.05.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID.
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place so that 
> both fields aren't set for the same INodeFile.






[jira] [Commented] (HDFS-11451) Ozone: Add protobuf definitions for container reports

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886973#comment-15886973
 ] 

Hadoop QA commented on HDFS-11451:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
42s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 9 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11451 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855010/HDFS-11451-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 632a1085227b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d63ec0c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18461/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18461/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886972#comment-15886972
 ] 

Andrew Wang commented on HDFS-11466:


Hi Zhe, so far it hasn't been released, though 2.8.0 is close. I think there's 
still time to sneak this simple change in before the release if you think 
that's important, though I don't consider the change incompatible.

Ping [~djp] for awareness.

> Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 
> 5000ms
> ---
>
> Key: HDFS-11466
> URL: https://issues.apache.org/jira/browse/HDFS-11466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11466.001.patch
>
>
> Per discussion on HDFS-10798, it might make sense to change the default value 
> for long write lock holds to 5000ms like the read threshold, to avoid 
> spamming the log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886963#comment-15886963
 ] 

Manoj Govindassamy commented on HDFS-11382:
---

Unit test failures in TestDataNodeVolumeFailure* are not related to the patch. 
These tests have gone flaky recently, but they pass for me locally.

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch, HDFS-11382.05.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to put the appropriate asserts in place to ensure that both 
> fields aren't set for the same INodeFile.
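
For illustration, a minimal sketch of the kind of assert suggested above 
(names are illustrative and not taken from the attached patches):

{code}
import com.google.common.base.Preconditions;

import org.apache.hadoop.hdfs.protocol.BlockType;

// Sketch only: an INodeFile must not carry both a replication factor and
// an EC policy ID; which one is set depends on the block type.
class LayoutRedundancyCheck {
  static void check(BlockType blockType, Short replication, Byte ecPolicyID) {
    if (blockType == BlockType.STRIPED) {
      Preconditions.checkArgument(replication == null && ecPolicyID != null,
          "A STRIPED file must set an EC policy ID and no replication factor");
    } else {
      Preconditions.checkArgument(replication != null && ecPolicyID == null,
          "A CONTIGUOUS file must set a replication factor and no EC policy ID");
    }
  }
}
{code}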



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10798) Make the threshold of reporting FSNamesystem lock contention configurable

2017-02-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886958#comment-15886958
 ] 

Zhe Zhang commented on HDFS-10798:
--

5000ms default for writeLock sounds OK. Thanks for the discussion Erik and 
Andrew.

> Make the threshold of reporting FSNamesystem lock contention configurable
> -
>
> Key: HDFS-10798
> URL: https://issues.apache.org/jira/browse/HDFS-10798
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>  Labels: newbie
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10789.001.patch, HDFS-10789.002.patch
>
>
> Currently {{FSNamesystem#WRITELOCK_REPORTING_THRESHOLD}} is set at 1 second. 
> In a busy system a lower overhead might be desired. In other scenarios, more 
> aggressive reporting might be desired. We should make the threshold 
> configurable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886960#comment-15886960
 ] 

Zhe Zhang commented on HDFS-11466:
--

Thanks for the patch, Andrew. Quick question: did you check that the old 
default has not been included in any releases?

> Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 
> 5000ms
> ---
>
> Key: HDFS-11466
> URL: https://issues.apache.org/jira/browse/HDFS-11466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11466.001.patch
>
>
> Per discussion on HDFS-10798, it might make sense to change the default value 
> for long write lock holds to 5000ms like the read threshold, to avoid 
> spamming the log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11414) Ozone : move StorageContainerLocation protocol to hdfs-client

2017-02-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886955#comment-15886955
 ] 

Chen Liang commented on HDFS-11414:
---

The findbugs warnings are for protobuf files. The failed tests are unrelated. 
Also, in my local runs, the following two tests were frequently (but not 
always) failing with or without the patches in this JIRA, while the others 
all passed. We may need to investigate further in a separate JIRA.

TestDelegationTokenFetcher.testDelegationTokenWithoutRenewerViaRPC
TestDatanodeStateMachine.testDatanodeStateContext

> Ozone : move StorageContainerLocation protocol to hdfs-client
> -
>
> Key: HDFS-11414
> URL: https://issues.apache.org/jira/browse/HDFS-11414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11414-HDFS-7240.001.patch, 
> HDFS-11414-HDFS-7240.002.patch, HDFS-11414-HDFS-7240.003.patch, 
> HDFS-11414-HDFS-7240.004.patch, HDFS-11414-HDFS-7240.004.patch
>
>
> {{StorageContainerLocation}} classes are client-facing classes of containers, 
> similar to {{XceiverClient}}, so they should be moved to hadoop-hdfs-client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886953#comment-15886953
 ] 

Hadoop QA commented on HDFS-11382:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
53s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11382 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854999/HDFS-11382.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux 310b829f8d23 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f5b031 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Comment Edited] (HDFS-11461) DataNode Disk Outlier Detection

2017-02-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886922#comment-15886922
 ] 

Arpit Agarwal edited comment on HDFS-11461 at 2/28/17 12:40 AM:


[~hanishakoneru], a few comments:
# {{DataNode#shutdown}} should use slowDiskDetectionThread.join() instead of 
sleep.
# {{diskOutliers}} should maintain the mean read/write/meta latency for each 
flagged disk.
# The low threshold should be higher than 1ms (average seek latency of a 7200 
RPM disk is 4ms). Let's conservatively set this to 20ms.
# startDiskOutlierDetectionThread should call 
Thread.currentThread().interrupt() after catching InterruptedException. See 
https://www.ibm.com/developerworks/library/j-jtp05236/
# slowDiskDetectionThread should be a daemon thread.
# DataNodePeerMetrics should also use OutlierDetector constructor that accepts 
minNumResources, and pass {{10}}, to keep the behavior consistent with what we 
have.


was (Author: arpitagarwal):
[~hanishakoneru], a few comments:
# {{DataNode#shutdown}} should use slowDiskDetectionThread.join() instead of 
sleep.
# {{diskOutliers}} should maintain the mean read/write/meta latency for each 
flagged disk.
# The low threshold should be higher than 1ms (seek latency of a 7200 RPM disk 
is 4ms). Let's conservatively set this to 20ms.
# startDiskOutlierDetectionThread should call 
Thread.currentThread().interrupt() after catching InterruptedException. See 
https://www.ibm.com/developerworks/library/j-jtp05236/
# slowDiskDetectionThread should be a daemon thread.
# DataNodePeerMetrics should also use OutlierDetector constructor that accepts 
minNumResources, and pass {{10}}, to keep the behavior consistent with what we 
have.

> DataNode Disk Outlier Detection
> ---
>
> Key: HDFS-11461
> URL: https://issues.apache.org/jira/browse/HDFS-11461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11461.000.patch
>
>
> Similar to how DataNodes collect peer performance statistics, we can collect 
> disk performance statistics per datanode and detect outliers among them, if 
> any.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11461) DataNode Disk Outlier Detection

2017-02-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886922#comment-15886922
 ] 

Arpit Agarwal commented on HDFS-11461:
--

[~hanishakoneru], a few comments:
# {{DataNode#shutdown}} should use slowDiskDetectionThread.join() instead of 
sleep.
# {{diskOutliers}} should maintain the mean read/write/meta latency for each 
flagged disk.
# The low threshold should be higher than 1ms (seek latency of a 7200 RPM disk 
is 4ms). Let's conservatively set this to 20ms.
# startDiskOutlierDetectionThread should call 
Thread.currentThread().interrupt() after catching InterruptedException. See 
https://www.ibm.com/developerworks/library/j-jtp05236/
# slowDiskDetectionThread should be a daemon thread.
# DataNodePeerMetrics should also use OutlierDetector constructor that accepts 
minNumResources, and pass {{10}}, to keep the behavior consistent with what we 
have.
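
Putting points 1, 4, and 5 together, a minimal sketch of the suggested idioms 
(class members and helper names are assumed, not taken from the patch):

{code}
// Sketch only: daemon thread, interrupt-status restoration, and join()
// on shutdown.
private Thread slowDiskDetectionThread;

private void startDiskOutlierDetectionThread() {
  slowDiskDetectionThread = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        detectOutliers();                    // hypothetical helper
        Thread.sleep(detectionIntervalMs);   // hypothetical field
      } catch (InterruptedException e) {
        // Restore the interrupt status so the loop condition observes it.
        Thread.currentThread().interrupt();
      }
    }
  });
  slowDiskDetectionThread.setDaemon(true);   // point 5: daemon thread
  slowDiskDetectionThread.start();
}

public void shutdown() throws InterruptedException {
  slowDiskDetectionThread.interrupt();
  slowDiskDetectionThread.join();            // point 1: join(), not sleep()
}
{code}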

> DataNode Disk Outlier Detection
> ---
>
> Key: HDFS-11461
> URL: https://issues.apache.org/jira/browse/HDFS-11461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11461.000.patch
>
>
> Similar to how DataNodes collect peer performance statistics, we can collect 
> disk performance statistics per datanode and detect outliers among them, if 
> any.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10798) Make the threshold of reporting FSNamesystem lock contention configurable

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886834#comment-15886834
 ] 

Andrew Wang commented on HDFS-10798:


Thanks for the background, Erik. I filed a simple patch on HDFS-11466 to bump 
this default to 5000ms; we can continue the discussion there.

> Make the threshold of reporting FSNamesystem lock contention configurable
> -
>
> Key: HDFS-10798
> URL: https://issues.apache.org/jira/browse/HDFS-10798
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>  Labels: newbie
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10789.001.patch, HDFS-10789.002.patch
>
>
> Currently {{FSNamesystem#WRITELOCK_REPORTING_THRESHOLD}} is set at 1 second. 
> In a busy system a lower overhead might be desired. In other scenarios, more 
> aggressive reporting might be desired. We should make the threshold 
> configurable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11466:
---
Status: Patch Available  (was: Open)

> Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 
> 5000ms
> ---
>
> Key: HDFS-11466
> URL: https://issues.apache.org/jira/browse/HDFS-11466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1, 2.8.0, 2.7.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11466.001.patch
>
>
> Per discussion on HDFS-10798, it might make sense to change the default value 
> for long write lock holds to 5000ms like the read threshold, to avoid 
> spamming the log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11466:
---
Attachment: HDFS-11466.001.patch

Trivial patch attached.

> Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 
> 5000ms
> ---
>
> Key: HDFS-11466
> URL: https://issues.apache.org/jira/browse/HDFS-11466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11466.001.patch
>
>
> Per discussion on HDFS-10798, it might make sense to change the default value 
> for long write lock holds to 5000ms like the read threshold, to avoid 
> spamming the log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11382:
--
Attachment: HDFS-11382.05.patch

Thanks for the review [~andrew.wang]. Attached the v05 patch with the typo and 
the unused import fixed.

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch, HDFS-11382.05.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to put the appropriate asserts in place to ensure that both 
> fields aren't set for the same INodeFile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11466) Change dfs.namenode.write-lock-reporting-threshold-ms default from 1000ms to 5000ms

2017-02-27 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11466:
--

 Summary: Change dfs.namenode.write-lock-reporting-threshold-ms 
default from 1000ms to 5000ms
 Key: HDFS-11466
 URL: https://issues.apache.org/jira/browse/HDFS-11466
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0-alpha1, 2.8.0, 2.7.4
Reporter: Andrew Wang
Assignee: Andrew Wang


Per discussion on HDFS-10798, it might make sense to change the default value 
for long write lock holds to 5000ms like the read threshold, to avoid spamming 
the log.
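
Operators who want the quieter behavior before the default changes can already 
override the key themselves, typically in hdfs-site.xml; a minimal 
programmatic sketch (the key name comes from this issue's summary):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Sketch: raise the write-lock reporting threshold to the proposed 5000ms.
Configuration conf = new HdfsConfiguration();
conf.setLong("dfs.namenode.write-lock-reporting-threshold-ms", 5000L);
{code}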



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8196) Erasure Coding related information on NameNode UI

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886785#comment-15886785
 ] 

Andrew Wang commented on HDFS-8196:
---

Looks like the 04 patch doesn't address my comments about not iterating the 
whole block map to calculate these counts, or about thread safety.

I think [~manojg] is planning to add these counts in the context of HDFS-10999, 
so maybe we revisit this JIRA after that.
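
For context, the alternative to scanning the block map is to maintain the 
counts incrementally as blocks change; a rough sketch (all names assumed, 
this is not from the patch):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: update an EC block-group counter on block add/remove so the
// UI can read it in O(1). AtomicLong keeps the counter thread-safe.
class EcBlockStats {
  private final AtomicLong stripedBlockCount = new AtomicLong();

  void onBlockAdded(boolean isStriped) {
    if (isStriped) {
      stripedBlockCount.incrementAndGet();
    }
  }

  void onBlockRemoved(boolean isStriped) {
    if (isStriped) {
      stripedBlockCount.decrementAndGet();
    }
  }

  long getStripedBlockCount() {
    return stripedBlockCount.get();
  }
}
{code}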

> Erasure Coding related information on NameNode UI
> -
>
> Key: HDFS-8196
> URL: https://issues.apache.org/jira/browse/HDFS-8196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: NameNode, WebUI, hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-8196.01.patch, HDFS-8196.02.patch, 
> HDFS-8196.03.patch, HDFS-8196.04.patch, Screen Shot 2017-02-06 at 
> 22.30.40.png, Screen Shot 2017-02-12 at 20.21.42.png, Screen Shot 2017-02-14 
> at 22.43.57.png
>
>
> NameNode WebUI shows EC related information and metrics. 
> This is depend on [HDFS-7674|https://issues.apache.org/jira/browse/HDFS-7674].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886784#comment-15886784
 ] 

Andrew Wang commented on HDFS-7859:
---

I think we've tabled this JIRA for now, so it's not very high urgency. We can 
file an OIV follow-on for completeness, though.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886759#comment-15886759
 ] 

Andrew Wang commented on HDFS-11382:


Thanks for the rev Manoj, LGTM overall, +1 pending these little nits:

* FSDirWriteFileOp: typo "replictaionFactor" in addFileForEditLog
* Unused import for Preconditions in INodeFileAttributes

[~ehiggs], my concern is that encoding whether a file is erasure coded in both 
the EC policy and the BlockTypeProto fields opens us up to possible incongruity 
between the two fields. Since I'm not proposing we do away with BlockType 
entirely, I double checked the Precondition checks we have in this patch, and 
it looks okay.

Also as an FYI, HDFS-8030 wants to implement "contiguous EC," so we need a JIRA 
to rename CONTIGUOUS to REPLICATED. I filed HDFS-11465 for this if you want to 
pick it up; it should be pretty easy to do this refactoring with IDE assistance.


> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to put the appropriate asserts in place to ensure that both 
> fields aren't set for the same INodeFile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11465) Rename BlockType#CONTIGUOUS to BlockType#REPLICATED

2017-02-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886753#comment-15886753
 ] 

Andrew Wang commented on HDFS-11465:


Linking some JIRAs, we discussed this a bit on HDFS-11382 as well.

> Rename BlockType#CONTIGUOUS to BlockType#REPLICATED
> ---
>
> Key: HDFS-11465
> URL: https://issues.apache.org/jira/browse/HDFS-11465
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>
> HDFS-10759 introduced a BlockType enum to INodeFile, with possible values 
> CONTIGUOUS or STRIPED.
> Since HDFS-8030 wants to implement "contiguous EC", CONTIGUOUS isn't an 
> appropriate name. I propose we rename CONTIGUOUS to REPLICATED for clarity.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11465) Rename BlockType#CONTIGUOUS to BlockType#REPLICATED

2017-02-27 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11465:
--

 Summary: Rename BlockType#CONTIGUOUS to BlockType#REPLICATED
 Key: HDFS-11465
 URL: https://issues.apache.org/jira/browse/HDFS-11465
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0-alpha2
Reporter: Andrew Wang


HDFS-10759 introduced a BlockType enum to INodeFile, with possible values 
CONTIGUOUS or STRIPED.

Since HDFS-8030 wants to implement "contiguous EC", CONTIGUOUS isn't an 
appropriate name. I propose we rename CONTIGUOUS to REPLICATED for clarity.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11414) Ozone : move StorageContainerLocation protocol to hdfs-client

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886748#comment-15886748
 ] 

Hadoop QA commented on HDFS-11414:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
47s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 9 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 86 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 12 
new + 86 unchanged - 0 fixed = 98 total (was 86) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.ozone.protocol.proto.StorageContainerLocationProtocolProtos$ContainerRequestProto.PARSER
 isn't final but should be  At StorageContainerLocationProtocolProtos.java:be  
At StorageContainerLocationProtocolProtos.java:[line 2859] |
|  |  Class 
org.apache.hadoop.ozone.protocol.proto.StorageContainerLocationProtocolProtos$ContainerRequestProto
 defines non-transient non-serializable instance field unknownFields  In 
StorageContainerLocationProtocolProtos.java:instance field unknownFields  In 

[jira] [Updated] (HDFS-11451) Ozone: Add protobuf definitions for container reports

2017-02-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11451:

Attachment: HDFS-11451-HDFS-7240.001.patch

> Ozone: Add protobuf definitions for container reports
> -
>
> Key: HDFS-11451
> URL: https://issues.apache.org/jira/browse/HDFS-11451
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11451-HDFS-7240.001.patch
>
>
> Container reports are sent from datanodes describing the state of the 
> container. Since a full container report can be as large as 2 MB, container 
> reports are sent by a datanode only when the SCM approves it.
> In this change, the datanode informs the SCM that it has a container report 
> ready. The SCM, based on its load, will send a command to the datanode to 
> send the actual report. The protobuf classes and plumbing required for that 
> change are part of this patch. The whole container-report handling will be 
> broken into multiple JIRAs to make code review easy.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11451) Ozone: Add protobuf definitions for container reports

2017-02-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11451:

Status: Patch Available  (was: Open)

> Ozone: Add protobuf definitions for container reports
> -
>
> Key: HDFS-11451
> URL: https://issues.apache.org/jira/browse/HDFS-11451
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11451-HDFS-7240.001.patch
>
>
> Container reports are sent from datanodes describing the state of the 
> container. Since a full container report can be as large as 2 MB, container 
> reports are sent by a datanode only when the SCM approves it.
> In this change, the datanode informs the SCM that it has a container report 
> ready. The SCM, based on its load, will send a command to the datanode to 
> send the actual report. The protobuf classes and plumbing required for that 
> change are part of this patch. The whole container-report handling will be 
> broken into multiple JIRAs to make code review easy.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11451) Ozone: Add protobuf definitions for container reports

2017-02-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11451:

Attachment: (was: HDFS-11451-HDFS-7240.001.patch)

> Ozone: Add protobuf definitions for container reports
> -
>
> Key: HDFS-11451
> URL: https://issues.apache.org/jira/browse/HDFS-11451
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
>
> Container reports are sent from datanodes describing the state of the 
> container. Since a full container report can be as large as 2 MB, container 
> reports are sent by a datanode only when the SCM approves it.
> In this change, the datanode informs the SCM that it has a container report 
> ready. The SCM, based on its load, will send a command to the datanode to 
> send the actual report. The protobuf classes and plumbing required for that 
> change are part of this patch. The whole container-report handling will be 
> broken into multiple JIRAs to make code review easy.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10999) Use more generic "low redundancy" blocks instead of "under replicated" blocks

2017-02-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886693#comment-15886693
 ] 

Manoj Govindassamy commented on HDFS-10999:
---

[~tasanuma0829], thanks for sharing your thoughts on the proposal. Will proceed 
with it unless I hear alternative suggestions from others. 

> Use more generic "low redundancy" blocks instead of "under replicated" blocks
> -
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still being used in messages in several places, such as web ui, dfsadmin 
> and fsck. We should probably change them to avoid confusion.
> Filing this jira to discuss it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11428) Change setErasureCodingPolicy to take a required string EC policy name

2017-02-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886685#comment-15886685
 ] 

Wei-Chiu Chuang commented on HDFS-11428:


[~andrew.wang] yep that's good for me. Thanks a lot for the work.

> Change setErasureCodingPolicy to take a required string EC policy name
> --
>
> Key: HDFS-11428
> URL: https://issues.apache.org/jira/browse/HDFS-11428
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11428.001.patch, HDFS-11428.002.patch, 
> HDFS-11428.003.patch, HDFS-11428.004.patch
>
>
> The current {{setErasureCodingPolicy}} API takes an optional {{ECPolicy}}. 
> This makes calling the API harder for clients, since they need to turn a 
> specified name into a policy, and the set of available EC policies is only 
> known to the NN.
> You can see this awkwardness in the current EC cli set command: it first 
> fetches the list of EC policies, looks for the one specified by the user, 
> then calls set. This means we need to issue two RPCs for every set 
> (inefficient), and we need to do validation on the NN side anyway (extraneous 
> work).
> Since we're phasing out the system default EC policy, it also makes sense to 
> make the policy a required parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11428) Change setErasureCodingPolicy to take a required string EC policy name

2017-02-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11428:
---
Attachment: HDFS-11428.004.patch

Thanks for the review Rakesh, new patch attached that should address your 
comments.

Wei-chiu, is this okay with you? I'd like to address any locking concerns in a 
separate patch, to keep this one focused on the change at hand.

> Change setErasureCodingPolicy to take a required string EC policy name
> --
>
> Key: HDFS-11428
> URL: https://issues.apache.org/jira/browse/HDFS-11428
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11428.001.patch, HDFS-11428.002.patch, 
> HDFS-11428.003.patch, HDFS-11428.004.patch
>
>
> The current {{setErasureCodingPolicy}} API takes an optional {{ECPolicy}}. 
> This makes calling the API harder for clients, since they need to turn a 
> specified name into a policy, and the set of available EC policies is only 
> known to the NN.
> You can see this awkwardness in the current EC cli set command: it first 
> fetches the list of EC policies, looks for the one specified by the user, 
> then calls set. This means we need to issue two RPCs for every set 
> (inefficient), and we need to do validation on the NN side anyway (extraneous 
> work).
> Since we're phasing out the system default EC policy, it also makes sense to 
> make the policy a required parameter.
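
To illustrate the difference (a sketch; the exact client-side method names 
are assumed):

{code}
// Before (sketch): two RPCs -- list the policies, then set the match.
ErasureCodingPolicy match = null;
for (ErasureCodingPolicy p : dfs.getAllErasureCodingPolicies()) {  // RPC 1
  if (p.getName().equals(policyName)) {
    match = p;
  }
}
dfs.setErasureCodingPolicy(path, match);                           // RPC 2

// After (sketch): one RPC; the NN resolves and validates the name.
dfs.setErasureCodingPolicy(path, policyName);
{code}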



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11382:
--
Attachment: HDFS-11382.04.patch

Thanks for the review [~ehiggs]. Attached the v04 patch, which addresses the 
following:
* Added testStripedLayoutRedundancy in {{TestStripedINodeFile}} to test 
INodeFile construction for STRIPED layout redundancy, with error cases.
* Added testContiguousLayoutRedundancy in {{TestINodeFile}} to test CONTIGUOUS 
layout redundancy INodeFile construction, with all error cases.

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch, HDFS-11382.04.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} 
> correctly reads the EC policy ID from the replication field and then uses 
> the right policy to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to put the appropriate asserts in place to ensure that both 
> fields aren't set for the same INodeFile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-02-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11450:
--
Attachment: HDFS-11450.002.patch

Uploaded the v002 patch to fix the license and javadoc issues. The failed 
tests are unrelated.

> HDFS specific network topology classes with storage type info included
> --
>
> Key: HDFS-11450
> URL: https://issues.apache.org/jira/browse/HDFS-11450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11450.001.patch, HDFS-11450.002.patch
>
>
> This JIRA adds storage type info into the network topology.
> More specifically, it adds a storage type map by extending {{InnerNodeImpl}} 
> to describe the available storages under the current node's subtree. This 
> map is updated when a node is added to or removed from the subtree.
> With this info, when choosing a random node with a storage type requirement, 
> the search can decide whether or not to go deeper into a subtree by 
> examining the available storage types first, as shown in the sketch below.
> One remaining to-do item: we might still need to separately handle the cases 
> where a DataNode restarts or a disk is hot-swapped; we will file another 
> JIRA for that.
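
A rough sketch of the bookkeeping described above (class and method names are 
assumed, not taken from the patches):

{code}
import java.util.EnumMap;

import org.apache.hadoop.fs.StorageType;

// Sketch only: per-inner-node counts of the storage types in its subtree,
// updated as nodes join/leave, so a random-node search can skip subtrees
// that lack the requested storage type.
class SubtreeStorageInfo {
  private final EnumMap<StorageType, Integer> counts =
      new EnumMap<>(StorageType.class);

  // Called when a datanode storage is added to (+1) or removed from (-1)
  // this subtree.
  void update(StorageType type, int delta) {
    counts.merge(type, delta, Integer::sum);
  }

  // The search descends into this subtree only when this returns true.
  boolean hasStorageType(StorageType type) {
    return counts.getOrDefault(type, 0) > 0;
  }
}
{code}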



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11455) Fix javadoc warnings in HDFS that caused by deprecated FileSystem APIs

2017-02-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886638#comment-15886638
 ] 

Mingliang Liu commented on HDFS-11455:
--

The patch looks good to me overall.

One comment: for calls to exists followed by isDirectory, we can combine them 
into a single getFileStatus call. This is not strictly needed, though, since 
we deliberately left exists un-deprecated.
One possible fix:
{code}
try {
  // getFileStatus throws FileNotFoundException for a missing path, so no
  // separate exists() call (and no null check) is needed.
  FileStatus status = fs.getFileStatus(path);
  if (status.isDirectory()) {
..
  }
} catch (FileNotFoundException e) {
  fail("File does not exist in test");
}
{code}

[~ste...@apache.org] do you have comments on this? Thanks.

> Fix javadoc warnings in HDFS that caused by deprecated FileSystem APIs
> --
>
> Key: HDFS-11455
> URL: https://issues.apache.org/jira/browse/HDFS-11455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11455.001.patch
>
>
> There are many javadoc warnings coming out after the FileSystem APIs that 
> promote inefficient call patterns were deprecated in HADOOP-13321. The 
> relevant warnings:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java:[320,18]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java:[1409,18]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java:[778,19]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java:[787,20]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaByStorageType.java:[834,18]
>  [deprecation] isFile(Path) in FileSystem has been 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11461) DataNode Disk Outlier Detection

2017-02-27 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886585#comment-15886585
 ] 

Hanisha Koneru commented on HDFS-11461:
---

The intention here is to catch slow disks, as opposed to overloaded disks, so 
it does not matter whether the disk traffic is actually from HDFS or not. We 
want to find the overall latency of each disk over a long time interval to 
avoid false positives due to transient traffic.

This information will only be exposed via JMX so that admins can get a 
reference on potentially slow disks. They can then run diagnostics or take 
whatever action is deemed fit.
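
For illustration only, one way the detection step could look (the median-ratio 
heuristic and the threshold below are assumptions, not necessarily what the 
patch does):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

/** Sketch: flag disks whose long-interval mean latency is far above
 *  the median of their peers (illustrative heuristic only). */
class DiskOutlierDetector {
  private static final double OUTLIER_RATIO = 3.0; // assumed threshold

  /** @param meanLatencyMs per-volume mean latency over the interval */
  static List<String> findOutliers(Map<String, Double> meanLatencyMs) {
    List<Double> sorted = new ArrayList<>(meanLatencyMs.values());
    Collections.sort(sorted);
    if (sorted.isEmpty()) {
      return Collections.emptyList();
    }
    double median = sorted.get(sorted.size() / 2);
    List<String> outliers = new ArrayList<>();
    for (Map.Entry<String, Double> e : meanLatencyMs.entrySet()) {
      if (e.getValue() > median * OUTLIER_RATIO) {
        outliers.add(e.getKey()); // candidate slow disk, reported via JMX
      }
    }
    return outliers;
  }
}
{code}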

> DataNode Disk Outlier Detection
> ---
>
> Key: HDFS-11461
> URL: https://issues.apache.org/jira/browse/HDFS-11461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11461.000.patch
>
>
> Similar to how DataNodes collect peer performance statistics, we can collect 
> disk performance statistics per datanode and detect outliers among them, if 
> any.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11414) Ozone : move StorageContainerLocation protocol to hdfs-client

2017-02-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11414:
--
Attachment: HDFS-11414-HDFS-7240.004.patch

Submitting the v004 patch again to get another build and see whether the same 
Docker problem happens.

> Ozone : move StorageContainerLocation protocol to hdfs-client
> -
>
> Key: HDFS-11414
> URL: https://issues.apache.org/jira/browse/HDFS-11414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11414-HDFS-7240.001.patch, 
> HDFS-11414-HDFS-7240.002.patch, HDFS-11414-HDFS-7240.003.patch, 
> HDFS-11414-HDFS-7240.004.patch, HDFS-11414-HDFS-7240.004.patch
>
>
> {{StorageContainerLocation}} classes are client-facing classes of containers, 
> similar to {{XceiverClient}}, so they should be moved to hadoop-hdfs-client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11414) Ozone : move StorageContainerLocation protocol to hdfs-client

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886541#comment-15886541
 ] 

Hadoop QA commented on HDFS-11414:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  4m 
50s{color} | {color:red} Docker failed to build yetus/hadoop:e809691. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11414 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853141/HDFS-11414-HDFS-7240.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18456/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone : move StorageContainerLocation protocol to hdfs-client
> -
>
> Key: HDFS-11414
> URL: https://issues.apache.org/jira/browse/HDFS-11414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11414-HDFS-7240.001.patch, 
> HDFS-11414-HDFS-7240.002.patch, HDFS-11414-HDFS-7240.003.patch, 
> HDFS-11414-HDFS-7240.004.patch
>
>
> {{StorageContainerLocation}} classes are client-facing classes of containers, 
> similar to {{XceiverClient}}, so they should be moved to hadoop-hdfs-client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11414) Ozone : move StorageContainerLocation protocol to hdfs-client

2017-02-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11414:
--
Status: Patch Available  (was: In Progress)

> Ozone : move StorageContainerLocation protocol to hdfs-client
> -
>
> Key: HDFS-11414
> URL: https://issues.apache.org/jira/browse/HDFS-11414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11414-HDFS-7240.001.patch, 
> HDFS-11414-HDFS-7240.002.patch, HDFS-11414-HDFS-7240.003.patch, 
> HDFS-11414-HDFS-7240.004.patch
>
>
> {{StorageContainerLocation}} classes are client-facing classes of containers, 
> similar to {{XceiverClient}}, so they should be moved to hadoop-hdfs-client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2017-02-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886509#comment-15886509
 ] 

Xiaoyu Yao commented on HDFS-11036:
---

[~vagarychen], we should add exceptions for ozone-related protobuf-generated 
files. This can be done in a separate ticket. 

+1 for v005 patch. 

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch, HDFS-11036-HDFS-7240.003.patch, 
> HDFS-11036-HDFS-7240.004.patch, HDFS-11036-HDFS-7240.005.patch
>
>
> Currently, every IO operation calling into XceiverClientManager opens and 
> closes a connection; this JIRA proposes to reuse connections to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11184) Ozone: SCM: Make SCM use container protocol

2017-02-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11184:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~vagarychen] and [~xyao] Thanks for the reviews. I have committed this to 
HDFS-7240.

> Ozone: SCM: Make SCM use container protocol
> ---
>
> Key: HDFS-11184
> URL: https://issues.apache.org/jira/browse/HDFS-11184
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11184-HDFS-7240.001.patch, 
> HDFS-11184-HDFS-7240.002.patch, HDFS-11184-HDFS-7240.003.patch, 
> HDFS-11184-HDFS-7240.004.patch, HDFS-11184-HDFS-7240.005.patch, 
> HDFS-11184-HDFS-7240.006.patch, HDFS-11184-HDFS-7240.007.patch
>
>
> SCM will start using the container protocol to communicate with datanodes. 
> This change introduces some test failures due to some missing features which 
> will be moved to KSM. We will file a separate JIRA to track the disabled 
> ozone tests. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11417) Add datanode admin command to get the storage info.

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886480#comment-15886480
 ] 

Hadoop QA commented on HDFS-11417:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11417 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854938/HDFS-11417.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 217a81f8f64f 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f5b031 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18455/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18455/testReport/ |
| 

[jira] [Commented] (HDFS-11336) [SPS]: Remove xAttrs when movements done or SPS disabled

2017-02-27 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886479#comment-15886479
 ] 

Uma Maheswara Rao G commented on HDFS-11336:


I understand your concern and thanks for raising it. My point was not about the 
meaning of blkManager handling namespace info; keeping the delegated API name 
in blkManager as removeXAttr will confuse, I agree.
I was only concerned about accessing namespace functionality in helper classes 
like BlockStorageMovementAttemptedItems. The key class SPS also holds the 
namesystem. Does it make sense to delegate there? The delegated method name 
could be something like SPS#cleanBCTrackingInfo, which can clean the xattrs. 
Just a thought. What do you say [~rakeshr]? 
Another thought is: SPS can have a method called 
SPS#notifyBlkStorageMovementFinished. This method can clean up the required 
resources.
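
To illustrate the delegation I have in mind (all names below are made up for 
the example, including the xattr name):
{code}
/** Sketch of the proposed delegation: helper classes call back into
 *  SPS, and only SPS talks to the namesystem (hypothetical names). */
class StoragePolicySatisfier {
  /** Minimal stand-in for the namesystem surface SPS needs. */
  interface Namesystem {
    void removeXAttr(long inodeId, String xattrName);
  }

  private final Namesystem namesystem;

  StoragePolicySatisfier(Namesystem namesystem) {
    this.namesystem = namesystem;
  }

  /** Called by helpers such as BlockStorageMovementAttemptedItems when
   *  all movements for a file are done; cleans up tracking state. */
  void notifyBlkStorageMovementFinished(long inodeId) {
    namesystem.removeXAttr(inodeId, "system.hdfs.sps" /* assumed name */);
  }
}
{code}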


> [SPS]: Remove xAttrs when movements done or SPS disabled
> 
>
> Key: HDFS-11336
> URL: https://issues.apache.org/jira/browse/HDFS-11336
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11336-HDFS-10285.001.patch, 
> HDFS-11336-HDFS-10285.002.patch, HDFS-11336-HDFS-10285.003.patch
>
>
> 1. When we finish the movement successfully, we should clean Xattrs.
> 2. When we disable SPS dynamically, we should clean Xattrs



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9868) Add ability for DistCp to run between 2 clusters

2017-02-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886478#comment-15886478
 ] 

Xiao Chen commented on HDFS-9868:
-

{quote}
bq. separate the /log4j.properties change to a new jira
This is for test and helps to print logs when running the test - helpful for 
debugging and test failure analysis. So I think we can keep it here.
{quote}
Created HADOOP-14127 for it.

> Add ability for DistCp to run between 2 clusters
> 
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.05.patch, HDFS-9868.06.patch, 
> HDFS-9868.07.patch, HDFS-9868.08.patch, HDFS-9868.09.patch, 
> HDFS-9868.10.patch, HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. It could take a long time when 
> copying huge data sets by distcp. If the source cluster switches its active 
> namenode, the distcp run will fail. This patch lets DistCp read source 
> cluster files in HA access mode. A source cluster configuration file needs 
> to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
>   configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11184) Ozone: SCM: Make SCM use container protocol

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886447#comment-15886447
 ] 

Hadoop QA commented on HDFS-11184:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 86 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-project generated 0 new + 119 unchanged 
- 1 fixed = 119 total (was 120) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 10 
unchanged - 2 fixed = 10 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 9 
unchanged - 1 fixed = 9 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11184 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2017-02-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886279#comment-15886279
 ] 

Chen Liang commented on HDFS-11036:
---

The findbugs warnings all come from the protobuf-generated file.

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch, HDFS-11036-HDFS-7240.003.patch, 
> HDFS-11036-HDFS-7240.004.patch, HDFS-11036-HDFS-7240.005.patch
>
>
> Currently, every IO operation calling into XceiverClientManager opens and 
> closes a connection; this JIRA proposes to reuse connections to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886270#comment-15886270
 ] 

Hadoop QA commented on HDFS-11036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 86 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854937/HDFS-11036-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2a85fb0b7235 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / ae783b1 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18453/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18453/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18453/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue 

[jira] [Updated] (HDFS-11417) Add datanode admin command to get the storage info.

2017-02-27 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11417:
--
Attachment: HDFS-11417.002.patch

v2 : Added ASF license for {{DatanodeStorageLocalInfo}}

> Add datanode admin command to get the storage info.
> ---
>
> Key: HDFS-11417
> URL: https://issues.apache.org/jira/browse/HDFS-11417
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.3
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11417.001.patch, HDFS-11417.002.patch
>
>
> It would be good to add an admin command for the datanode to get the data 
> directory info, such as storage type, directory path, number of blocks, 
> capacity, and used space. This will be helpful in large clusters where DNs 
> have multiple data directories configured. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11184) Ozone: SCM: Make SCM use container protocol

2017-02-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886250#comment-15886250
 ] 

Anu Engineer commented on HDFS-11184:
-

Asked Jenkins for a rebuild; it seems that Jenkins got confused.

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/18454/


> Ozone: SCM: Make SCM use container protocol
> ---
>
> Key: HDFS-11184
> URL: https://issues.apache.org/jira/browse/HDFS-11184
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11184-HDFS-7240.001.patch, 
> HDFS-11184-HDFS-7240.002.patch, HDFS-11184-HDFS-7240.003.patch, 
> HDFS-11184-HDFS-7240.004.patch, HDFS-11184-HDFS-7240.005.patch, 
> HDFS-11184-HDFS-7240.006.patch, HDFS-11184-HDFS-7240.007.patch
>
>
> SCM will start using the container protocol to communicate with datanodes. 
> This change introduces some test failures due to some missing features which 
> will be moved to KSM. We will file a separate JIRA to track the disabled 
> ozone tests. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2017-02-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Attachment: HDFS-11036-HDFS-7240.005.patch

Thanks [~xiaoyu yao] for the review! Uploaded v005 patch.

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch, HDFS-11036-HDFS-7240.003.patch, 
> HDFS-11036-HDFS-7240.004.patch, HDFS-11036-HDFS-7240.005.patch
>
>
> Currently, every IO operation calling into XceiverClientManager opens and 
> closes a connection; this JIRA proposes to reuse connections to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11463) Ozone: Add metrics for container operations and export over JMX

2017-02-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11463:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thank you both for the patch and code review. I have also reviewed and 
committed this patch to HDFS-7240.
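
For readers skimming the thread, the per-op metrics boil down to a count plus 
an average latency, roughly like the self-contained sketch below (the real 
patch presumably builds on Hadoop's metrics machinery; names here are 
illustrative):
{code}
import java.util.concurrent.atomic.AtomicLong;

/** Sketch of per-op metrics (count + average latency); in the real
 *  code these values would be exported over JMX. */
public class ContainerOpMetrics {
  private final AtomicLong ops = new AtomicLong();
  private final AtomicLong totalLatencyMs = new AtomicLong();

  /** Record one completed operation and its latency. */
  public void record(long latencyMs) {
    ops.incrementAndGet();
    totalLatencyMs.addAndGet(latencyMs);
  }

  public long getOps() { return ops.get(); }

  public double getAvgLatencyMs() {
    long n = ops.get();
    return n == 0 ? 0.0 : (double) totalLatencyMs.get() / n;
  }

  public static void main(String[] args) {
    ContainerOpMetrics createContainer = new ContainerOpMetrics();
    createContainer.record(12);
    createContainer.record(18);
    System.out.println(createContainer.getOps() + " ops, avg "
        + createContainer.getAvgLatencyMs() + " ms");
  }
}
{code}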

> Ozone: Add metrics for container operations and export over JMX
> ---
>
> Key: HDFS-11463
> URL: https://issues.apache.org/jira/browse/HDFS-11463
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11463-HDFS-7240.001.patch
>
>
> Add metrics for various container operations.
> Measure the number of ops and the average latency for all the ops.
> A non-exhaustive list of ops is:
> 1) container create
> 2) container delete
> 3) container list
> 4) put small file
> 5) get small file
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-02-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886198#comment-15886198
 ] 

Chen Liang commented on HDFS-11450:
---

Thanks [~linyiqun] for bringing up this point! 

This is indeed something very important to think through. My current plan is 
actually along the lines of your proposal here: essentially having the 
datanodes report their "current storage" info. I'm looking into the current 
code logic to see if there is anything we can take advantage of; if not, we 
will need to add this info.

> HDFS specific network topology classes with storage type info included
> --
>
> Key: HDFS-11450
> URL: https://issues.apache.org/jira/browse/HDFS-11450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11450.001.patch
>
>
> This JIRA adds storage type info into network topology.
> More specifically, this JIRA adds a storage type map by extending 
> {{InnerNodeImpl}} to describe the available storages under the current node's 
> subtree. This map is updated when a node is added/removed from the subtree.
> With this info, when choosing a random node with a storage type requirement, 
> the search can decide whether or not to go deeper into a subtree by examining 
> the available storage types first.
> One remaining to-do item: we might still need to separately handle the cases 
> where a DataNode restarts or a disk is hot-swapped; we will file another JIRA 
> for that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2017-02-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886189#comment-15886189
 ] 

Xiaoyu Yao commented on HDFS-11036:
---

Thanks [~vagarychen] for updating the patch. It looks good to me overall. I 
just have two more comments:

1. XceiverClientManager.java Line 82-83: should we protect the put operation 
with the openclient lock? (see the sketch below)

2. Can you fix the checkstyle issues?
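
To spell out comment 1, something along these lines, i.e. the lookup and the 
put happen under the same lock (a hypothetical simplification, not the actual 
XceiverClientManager code):
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/** Sketch: keep the cache lookup *and* the put under one lock so two
 *  callers cannot race to create duplicate clients. */
class ClientCacheSketch<K, C> {
  interface Factory<K, C> { C create(K key); }

  private final Lock openClientLock = new ReentrantLock();
  private final Map<K, C> clientCache = new HashMap<>();

  C acquireClient(K key, Factory<K, C> factory) {
    openClientLock.lock();
    try {
      C client = clientCache.get(key);
      if (client == null) {
        client = factory.create(key);  // open the connection once
        clientCache.put(key, client);  // the put stays inside the lock
      }
      return client;
    } finally {
      openClientLock.unlock();
    }
  }
}
{code}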



> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch, HDFS-11036-HDFS-7240.003.patch, 
> HDFS-11036-HDFS-7240.004.patch
>
>
> Currently, every IO operation calling into XceiverClientManager opens and 
> closes a connection; this JIRA proposes to reuse connections to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11463) Ozone: Add metrics for container operations and export over JMX

2017-02-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886182#comment-15886182
 ] 

Anu Engineer commented on HDFS-11463:
-

[~msingh] Thanks for the patch. [~xyao] Thanks for the review. I will commit 
this shortly to HDFS-7240. We already have a JIRA for TestDatanodeStateMachine 
failure.


> Ozone: Add metrics for container operations and export over JMX
> ---
>
> Key: HDFS-11463
> URL: https://issues.apache.org/jira/browse/HDFS-11463
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11463-HDFS-7240.001.patch
>
>
> Add metrics for various container operations.
> Measure the number of ops and the average latency for all the ops.
> A non-exhaustive list of ops is:
> 1) container create
> 2) container delete
> 3) container list
> 4) put small file
> 5) get small file
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11463) Ozone: Add metrics for container operations and export over JMX

2017-02-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11463:

Summary: Ozone: Add metrics for container operations and export over JMX  
(was: Add metrics for container operations and export over JMX)

> Ozone: Add metrics for container operations and export over JMX
> ---
>
> Key: HDFS-11463
> URL: https://issues.apache.org/jira/browse/HDFS-11463
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11463-HDFS-7240.001.patch
>
>
> Add metrics for various container operations.
> Measure the number of ops and the average latency for all the ops.
> A non-exhaustive list of ops is:
> 1) container create
> 2) container delete
> 3) container list
> 4) put small file
> 5) get small file
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11463) Add metrics for container operations and export over JMX

2017-02-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886149#comment-15886149
 ] 

Xiaoyu Yao commented on HDFS-11463:
---

Thanks [~msingh] for working on this. The patch looks good to me, +1.

TestDatanodeStateMachine & TestCBlockServer seem to fail because some 
previously executed tests failed to clean up the cluster (e.g., delete 
directories, close the listening port). Can you file a separate ticket to fix 
that? Thanks!

> Add metrics for container operations and export over JMX
> 
>
> Key: HDFS-11463
> URL: https://issues.apache.org/jira/browse/HDFS-11463
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11463-HDFS-7240.001.patch
>
>
> Add metrics for various container operations.
> Measure the number of ops and the average latency for all the ops.
> A non-exhaustive list of ops is:
> 1) container create
> 2) container delete
> 3) container list
> 4) put small file
> 5) get small file
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10798) Make the threshold of reporting FSNamesystem lock contention configurable

2017-02-27 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886094#comment-15886094
 ] 

Erik Krogen commented on HDFS-10798:


My understanding of the intuition for why they would be different is that a 
long read lock hold is less serious than a long write lock hold, since other 
reads can still proceed. Also, long reads may be more expected, given 
listStatus- and contentSummary-type commands. It is also typical for a higher 
percentage of operations to be reads, so the potential spam volume may be 
heavier for read locks than for write locks.

That being said, 5000ms may still be a more sensible default, erring on the 
side of lower overhead unless an operator actually uses these log statements, 
in which case they can tune the threshold themselves. [~zhz], do you have any 
opinion? 
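
For context, the mechanism under discussion boils down to something like the 
sketch below (simplified; the field and threshold names are placeholders, and 
the write-lock path, omitted here, would be analogous):
{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch: warn when a read lock hold exceeds a configurable
 *  threshold (placeholder names; write path analogous). */
class ReportingLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final long readThresholdMs;  // e.g. 5000 for reads
  private final ThreadLocal<Long> lockedAtNanos = new ThreadLocal<>();

  ReportingLockSketch(long readThresholdMs) {
    this.readThresholdMs = readThresholdMs;
  }

  void readLock() {
    lock.readLock().lock();
    lockedAtNanos.set(System.nanoTime());  // per-thread start time
  }

  void readUnlock() {
    long heldMs = TimeUnit.NANOSECONDS.toMillis(
        System.nanoTime() - lockedAtNanos.get());
    lock.readLock().unlock();
    if (heldMs >= readThresholdMs) {
      System.err.println("Read lock held for " + heldMs + " ms");
    }
  }
}
{code}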

> Make the threshold of reporting FSNamesystem lock contention configurable
> -
>
> Key: HDFS-10798
> URL: https://issues.apache.org/jira/browse/HDFS-10798
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>  Labels: newbie
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10789.001.patch, HDFS-10789.002.patch
>
>
> Currently {{FSNamesystem#WRITELOCK_REPORTING_THRESHOLD}} is set at 1 second. 
> In a busy system a lower overhead might be desired. In other scenarios, more 
> aggressive reporting might be desired. We should make the threshold 
> configurable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9868) Add ability for DistCp to run between 2 clusters

2017-02-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886076#comment-15886076
 ] 

Yongjun Zhang edited comment on HDFS-9868 at 2/27/17 4:35 PM:
--

Thanks [~xiaochen] for continuing the effort.

Looking further, I think using DistributedCache is better and safer; the trick 
is how to manage the different conf dirs passed to DistributedCache.

I think one possible solution is (this needs to be well documented if we decide 
to go with this approach):
1. (user) make a copy of each conf dir, put them at a central location (such as 
where we kick off DistCp) that's accessible by DistCp, 
2. Each conf dir is required to have a simple name, such as cluster1conf, 
cluster2conf
3. Then we can flatten the names as distcp_cluster1conf1 etc. (include a prefix 
"distcp_" to be safer) when putting them into the distributed cache when 
running distcp
4. The confMap file entries are: 
cluster1 cluster1conf
cluster2 cluster2conf
...
5. Then with the DistributedCache API, we can get these files and pass them to 
the Configuration.addResource APIs.

Note: the DistributedCache API is deprecated; its methods have moved to Job.



was (Author: yzhangal):
Thanks [~xiaochen] for continuing the effort.

Looking further, I think using DistributedCache is better and safer; the trick 
is how to manage the different conf dirs passed to DistributedCache.

I think one possible solution is (this needs to be documented):
1. (user) make a copy of each conf dir, put them at a central location (such as 
where we kick off DistCp) that's accessible by DistCp, 
2. Each conf dir is required to have a simple name, such as cluster1conf, 
cluster2conf
3. Then we can flatten the names as distcp_cluster1conf1 etc. (include a prefix 
"distcp_" to be safer) when putting them into the distributed cache when 
running distcp
4. The confMap file entries are: 
cluster1 cluster1conf
cluster2 cluster2conf
...
5. Then with the DistributedCache API, we can get these files and pass them to 
the Configuration.addResource APIs.

Note: the DistributedCache API is deprecated; its methods have moved to Job.


> Add ability for DistCp to run between 2 clusters
> 
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.05.patch, HDFS-9868.06.patch, 
> HDFS-9868.07.patch, HDFS-9868.08.patch, HDFS-9868.09.patch, 
> HDFS-9868.10.patch, HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. It could take a long time when 
> copying huge data sets by distcp. If the source cluster switches its active 
> namenode, the distcp run will fail. This patch lets DistCp read source 
> cluster files in HA access mode. A source cluster configuration file needs 
> to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
>   configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9868) Add ability for DistCp to run between 2 clusters

2017-02-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886076#comment-15886076
 ] 

Yongjun Zhang commented on HDFS-9868:
-

Thanks [~xiaochen] for continuing the effort.

Looking further, I think using DistributedCache is better and safer; the trick 
is how to manage the different conf dirs passed to DistributedCache.

I think one possible solution is (this needs to be documented):
1. (user) make a copy of each conf dir, put them at a central location (such as 
where we kick off DistCp) that's accessible by DistCp, 
2. Each conf dir is required to have a simple name, such as cluster1conf, 
cluster2conf
3. Then we can flatten the names as distcp_cluster1conf1 etc. (include a prefix 
"distcp_" to be safer) when putting them into the distributed cache when 
running distcp
4. The confMap file entries are: 
cluster1 cluster1conf
cluster2 cluster2conf
...
5. Then with the DistributedCache API, we can get these files and pass them to 
the Configuration.addResource APIs.

Note: the DistributedCache API is deprecated; its methods have moved to Job. (A 
rough sketch of steps 3-5 follows.)
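
A minimal sketch of steps 3-5, assuming the example names above (driver-side 
shipping plus task-side reconstruction; error handling omitted, not the final 
design):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

class ConfDistributionSketch {
  /** Driver side: add each flattened conf file to the distributed
   *  cache, with a URI fragment so it is symlinked under a
   *  predictable name in the task working directory. */
  static void shipConfs(Job job) throws Exception {
    job.addCacheFile(new URI("hdfs:///tmp/cluster1conf/hdfs-site.xml"
        + "#distcp_cluster1conf_hdfs-site.xml"));
  }

  /** Task side: layer the symlinked file onto a fresh Configuration. */
  static Configuration confForCluster(String cluster) {
    Configuration conf = new Configuration(false);
    conf.addResource(new Path("distcp_" + cluster + "conf_hdfs-site.xml"));
    return conf;
  }
}
{code}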


> Add ability for DistCp to run between 2 clusters
> 
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.05.patch, HDFS-9868.06.patch, 
> HDFS-9868.07.patch, HDFS-9868.08.patch, HDFS-9868.09.patch, 
> HDFS-9868.10.patch, HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. It could take a long time when 
> copying huge data sets by distcp. If the source cluster switches its active 
> namenode, the distcp run will fail. This patch lets DistCp read source 
> cluster files in HA access mode. A source cluster configuration file needs 
> to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
>   configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11464) Improve the selection in choosing storage for blocks

2017-02-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11464:
-
Description: 
Currently the logic in choosing storage for blocks is not ideal. It always 
uses the first valid storage of a given StorageType (see 
{{DataNodeDescriptor#chooseStorage4Block}}). This is not a good selection 
strategy: it means blocks will always be written to the same volume (the first 
volume) while other valid volumes get no choice. This problem was brought up by 
this comment ( 
https://issues.apache.org/jira/browse/HDFS-9807?focusedCommentId=15878382=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15878382
 )

There is one solution from me:

* First, based on the existing storages in one node, extract all the valid 
storages into a collection.
* Then, shuffle the order of these valid storages to get a new collection.
* Finally, take the first storage from the new collection.

These steps will be executed in {{DataNodeDescriptor#chooseStorage4Block}} and 
replace the current logic. I think this improvement can be done as a subtask 
under HDFS-11419. Any further comments are welcomed.


  was:
Currently the logic in choosing storage for blocks is not ideal. It always 
uses the first valid storage of a given StorageType (see 
{{DataNodeDescriptor#chooseStorage4Block}}). This is not a good selection 
strategy: it means blocks will always be written to the same volume (the first 
volume) until this volume has no available space. This problem was brought up 
by this comment ( 
https://issues.apache.org/jira/browse/HDFS-9807?focusedCommentId=15878382=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15878382
 )

There is one solution from me:

* First, based on the existing storages in one node, extract all the valid 
storages into a collection.
* Then, shuffle the order of these valid storages to get a new collection.
* Finally, take the first storage from the new collection.

These steps will be executed in {{DataNodeDescriptor#chooseStorage4Block}} and 
replace the current logic. I think this improvement can be done as a subtask 
under HDFS-11419. Any further comments are welcomed.



> Improve the selection in choosing storage for blocks
> 
>
> Key: HDFS-11464
> URL: https://issues.apache.org/jira/browse/HDFS-11464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> Currently the logic in choosing storage for blocks is not ideal. It always 
> uses the first valid storage of a given StorageType (see 
> {{DataNodeDescriptor#chooseStorage4Block}}). This is not a good selection 
> strategy: it means blocks will always be written to the same volume (the 
> first volume) while other valid volumes get no choice. This problem was 
> brought up by this comment ( 
> https://issues.apache.org/jira/browse/HDFS-9807?focusedCommentId=15878382=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15878382
>  )
> There is one solution from me:
> * First, based on the existing storages in one node, extract all the valid 
> storages into a collection.
> * Then, shuffle the order of these valid storages to get a new collection.
> * Finally, take the first storage from the new collection.
> These steps will be executed in {{DataNodeDescriptor#chooseStorage4Block}} 
> and replace the current logic. I think this improvement can be done as a 
> subtask under HDFS-11419. Any further comments are welcomed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11464) Improve the selection in choosing storage for blocks

2017-02-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11464:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-11419

> Improve the selection in choosing storage for blocks
> 
>
> Key: HDFS-11464
> URL: https://issues.apache.org/jira/browse/HDFS-11464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> Currently the logic in choosing storage for blocks is not ideal. It always 
> uses the first valid storage of a given StorageType (see 
> {{DataNodeDescriptor#chooseStorage4Block}}). This is not a good selection 
> strategy: it means blocks will always be written to the same volume (the 
> first volume) until this volume has no available space. This problem was 
> brought up by this comment ( 
> https://issues.apache.org/jira/browse/HDFS-9807?focusedCommentId=15878382=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15878382
>  )
> There is one solution from me:
> * First, based on the existing storages in one node, extract all the valid 
> storages into a collection.
> * Then, shuffle the order of these valid storages to get a new collection.
> * Finally, take the first storage from the new collection.
> These steps will be executed in {{DataNodeDescriptor#chooseStorage4Block}} 
> and replace the current logic. I think this improvement can be done as a 
> subtask under HDFS-11419. Any further comments are welcomed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11464) Improve the selection in choosing storage for blocks

2017-02-27 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11464:


 Summary: Improve the selection in choosing storage for blocks
 Key: HDFS-11464
 URL: https://issues.apache.org/jira/browse/HDFS-11464
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Currently the logic in choosing storage for blocks is not a good way. It always 
uses the first valid storage of a given StorageType ({{see 
DataNodeDescriptor#chooseStorage4Block}}). This should not be a good selection. 
That means blcoks will always be written to the same volume (first volume) 
until this volume has not available space. This problem is brought up by this 
comment ( 
https://issues.apache.org/jira/browse/HDFS-9807?focusedCommentId=15878382=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15878382
 )

I propose one solution:

* First, based on the existing storages in one node, extract all the valid 
storages into a collection.
* Then, shuffle the order of these valid storages to get a new collection.
* Finally, take the first storage from the new collection.

These steps would be executed in {{DataNodeDescriptor#chooseStorage4Block}} and 
would replace the current logic. I think this improvement can be done as a 
subtask under HDFS-11419. Any further comments are welcome.
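
A minimal sketch of the proposed shuffle-based selection, using simplified 
stand-in types rather than the real {{DatanodeStorageInfo}} API (whose 
signature may differ):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch only: StorageInfo and StorageType are simplified stand-ins,
// not the actual HDFS classes.
class StorageChooserSketch {
  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

  interface StorageInfo {
    StorageType getType();
    long getRemaining();
  }

  static StorageInfo chooseStorage4Block(List<StorageInfo> storages,
      StorageType type, long blockSize) {
    // First: collect every valid storage of the requested type.
    List<StorageInfo> valid = new ArrayList<>();
    for (StorageInfo s : storages) {
      if (s.getType() == type && s.getRemaining() >= blockSize) {
        valid.add(s);
      }
    }
    if (valid.isEmpty()) {
      return null; // no storage can host the block
    }
    // Then: shuffle so repeated writes spread across volumes instead of
    // always hitting the first valid volume.
    Collections.shuffle(valid);
    // Finally: take the first storage from the shuffled collection.
    return valid.get(0);
  }
}
{code}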




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6708) StorageType should be encoded in the block token

2017-02-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885825#comment-15885825
 ] 

Ewan Higgs commented on HDFS-6708:
--

Hi,
Could I get a review on this if/when someone has time? This is slowing down 
HDFS-9807 since they touch the same code.

Based on the people who previously reviewed changes to the BlockTokenIdentifier, 
I think [~chris.douglas], [~andrew.wang], and [~daryn] are good candidates for 
review. Of course, [~arpitagarwal] as well, as he's the reporter.

Thanks

> StorageType should be encoded in the block token
> 
>
> Key: HDFS-6708
> URL: https://issues.apache.org/jira/browse/HDFS-6708
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.4.1
>Reporter: Arpit Agarwal
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-6708.0001.patch, HDFS-6708.0002.patch, 
> HDFS-6708.0003.patch
>
>
> HDFS-6702 is adding support for file creation based on StorageType.
> The block token is used as a tamper-proof channel for communicating block 
> parameters from the NN to the DN during block creation. The StorageType 
> should be included in this block token.
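> A hedged sketch of the idea (field and method names are assumptions, not the 
> actual patch): the storage types ride inside the serialized identifier, so the 
> token's MAC covers them and the DN can trust what the NN granted.
> {code}
> import java.io.DataOutput;
> import java.io.IOException;
> 
> // Sketch only: layout and names are assumptions, not the patch.
> class BlockTokenIdentifierSketch {
>   enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }
> 
>   private StorageType[] storageTypes; // NN-granted types for this block
> 
>   // Anything written here is covered by the token's MAC, so a client
>   // cannot tamper with the granted storage types in transit.
>   void writeStorageTypes(DataOutput out) throws IOException {
>     out.writeInt(storageTypes.length);
>     for (StorageType t : storageTypes) {
>       out.writeUTF(t.name()); // serialize each granted storage type
>     }
>   }
> }
> {code}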



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885730#comment-15885730
 ] 

Hadoop QA commented on HDFS-11338:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11338 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854867/HDFS-11338-HDFS-10285.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ad96d5669146 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 58240d8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18452/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18452/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18452/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Fix timeout issue in unit tests 

[jira] [Commented] (HDFS-11455) Fix javadoc warnings in HDFS that caused by deprecated FileSystem APIs

2017-02-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885633#comment-15885633
 ] 

Yiqun Lin commented on HDFS-11455:
--

Agreed on your proposal, [~liuml07]. Have filed the JIRA YARN-6239 and attached 
the patch under the YARN module. Will file a new JIRA under HADOOP-COMMON to 
update the places in hadoop-common and hadoop-tools soon. 

> Fix javadoc warnings in HDFS that caused by deprecated FileSystem APIs
> --
>
> Key: HDFS-11455
> URL: https://issues.apache.org/jira/browse/HDFS-11455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11455.001.patch
>
>
> Many javadoc warnings appeared after HADOOP-13321 deprecated the FileSystem 
> APIs that promote inefficient call patterns. The relevant warnings:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java:[320,18]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java:[1409,18]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java:[778,19]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java:[787,20]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaByStorageType.java:[834,18]
>  [deprecation] isFile(Path) in FileSystem has been 
> {code}
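> For reference, the replacement pattern these deprecations push callers toward 
> is to fetch a {{FileStatus}} once instead of calling the shorthand helpers; a 
> small sketch (method and variable names assumed):
> {code}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> class DeprecationFix {
>   // Instead of the deprecated fs.isFile(path) / fs.isDirectory(path),
>   // each of which issues its own RPC, fetch the status once and branch.
>   static void handle(FileSystem fs, Path path) throws IOException {
>     FileStatus status = fs.getFileStatus(path);
>     if (status.isFile()) {
>       // handle a regular file
>     } else if (status.isDirectory()) {
>       // handle a directory
>     }
>   }
> }
> {code}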



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11455) Fix javadoc warnings in HDFS that caused by deprecated FileSystem APIs

2017-02-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11455:
-
Summary: Fix javadoc warnings in HDFS that caused by deprecated FileSystem 
APIs  (was: Fix javadoc warnings in HDFS caused by deprecated FileSystem APIs)

> Fix javadoc warnings in HDFS that caused by deprecated FileSystem APIs
> --
>
> Key: HDFS-11455
> URL: https://issues.apache.org/jira/browse/HDFS-11455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11455.001.patch
>
>
> Many javadoc warnings appeared after HADOOP-13321 deprecated the FileSystem 
> APIs that promote inefficient call patterns. The relevant warnings:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java:[320,18]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java:[1409,18]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java:[778,19]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java:[787,20]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaByStorageType.java:[834,18]
>  [deprecation] isFile(Path) in FileSystem has been 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9807) Add an optional StorageID to writes

2017-02-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885610#comment-15885610
 ] 

Ewan Higgs commented on HDFS-9807:
--

{quote}
How do you envision this interacting with VolumeChoosingPolicy?
{quote}
[~jpallas], I am planning to push information like storageID and 
storageType into {{FsVolumeList}} (as described in the last suggestion in my 
previous comment) and probably also into the interface for the 
{{VolumeChoosingPolicy}}, so it can decide to choose the volume based on all 
the info. Currently, the storageID is thrown away since the NN doesn't send 
anything useful. But with PROVIDED storage (HDFS-9806), it's important to use 
the specified storageID.

{quote}
How would this fit in with federation? With multiple block pools, a given NN 
has incomplete information about the storage being managed by a datanode. Would 
the name node be able to make good decisions with the information that it has 
available?
{quote}
Given the above, I don't plan to change the NN's 
{{DataNodeDescriptor.chooseStorage4Block}} in this change set.
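
For illustration, a hypothetical sketch of the kind of interface change 
described (the extra parameter is an assumption, not the committed code): the 
NN-preferred storageID is passed down so the policy can honor it when present.
{code}
import java.io.IOException;
import java.util.List;

// Sketch only: a simplified stand-in for the real VolumeChoosingPolicy.
interface VolumeChoosingPolicySketch<V> {
  // storageId is the NN's preferred target; an implementation may ignore
  // it (today's behavior) or pin the replica to that volume, which
  // matters for PROVIDED storage.
  V chooseVolume(List<V> volumes, long replicaSize, String storageId)
      throws IOException;
}
{code}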

> Add an optional StorageID to writes
> ---
>
> Key: HDFS-9807
> URL: https://issues.apache.org/jira/browse/HDFS-9807
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chris Douglas
>Assignee: Ewan Higgs
>
> The {{BlockPlacementPolicy}} considers specific storages, but when the 
> replica is written the DN {{VolumeChoosingPolicy}} is unaware of any 
> preference or constraints from other policies affecting placement. This 
> limits heterogeneity to the declared storage types, which are treated as 
> fungible within the target DN. It should be possible to influence or 
> constrain the DN policy to select a particular storage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885593#comment-15885593
 ] 

Ewan Higgs commented on HDFS-11382:
---

Is it possible to add more tests like {{testSaveAndLoadStripedINodeFile}} 
and/or {{createStripedINodeFile}} which create an {{INodeFile}} with bad 
arguments and verify that it throws?

e.g. replication set together with a set erasureCodingPolicyID.
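
A hedged sketch of such a negative test, as a fragment (the helper name follows 
{{createStripedINodeFile}} above, but its real signature may differ):
{code}
// Sketch only: createStripedINodeFile(...) stands in for the test helper
// suggested above; adjust to its real signature.
@Test(expected = IllegalArgumentException.class)
public void testReplicationAndEcPolicyBothSet() throws Exception {
  // Both a replication factor and an EC policy ID are set, which the
  // new optional-field layout should forbid.
  createStripedINodeFile((short) 3, /* hypothetical EC policy id */ (byte) 1);
}
{code}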

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} correctly 
> reads the EC policy ID from the replication field and then uses the right policy 
> to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place to make 
> sure both fields aren't set for the same INodeFile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885566#comment-15885566
 ] 

Ewan Higgs commented on HDFS-11382:
---

BlockTypeProto is planned to be extended here.

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} correctly 
> reads the EC policy ID from the replication field and then uses the right policy 
> to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place to make 
> sure both fields aren't set for the same INodeFile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-02-27 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-11338:

Status: Patch Available  (was: Open)

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11338-HDFS-10285.00.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop the NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So it takes a longer time to finish some tests, and this leads to the timeout 
> failures.
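> One way to shorten the shutdown wait, sketched under the assumption that the 
> satisfier thread responds to interruption (not necessarily the attached patch):
> {code}
> // Interrupt first so join() does not sit out the full timeout while
> // the satisfier thread is blocked, then wait briefly for it to exit.
> storagePolicySatisfierThread.interrupt();
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
>   Thread.currentThread().interrupt(); // preserve the interrupt status
> }
> {code}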



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11428) Change setErasureCodingPolicy to take a required string EC policy name

2017-02-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885522#comment-15885522
 ] 

Rakesh R commented on HDFS-11428:
-

Thanks [~andrew.wang] for the useful improvement. Overall the patch looks good to 
me. I have a few suggestions:
# It's good to update the javadoc with {{@throws IllegalArgumentException if the 
given ecPolicyName is invalid}}.
# Minor suggestion: validate a null {{ecPolicyName}}, otherwise it will throw an NPE.
{code}
DistributedFileSystem#setErasureCodingPolicy()

if (ecPolicyName == null) {
  throw new IOException("Invalid erasure coding policy name");
}
{code}

> Change setErasureCodingPolicy to take a required string EC policy name
> --
>
> Key: HDFS-11428
> URL: https://issues.apache.org/jira/browse/HDFS-11428
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11428.001.patch, HDFS-11428.002.patch, 
> HDFS-11428.003.patch
>
>
> The current {{setErasureCodingPolicy}} API takes an optional {{ECPolicy}}. 
> This makes calling the API harder for clients, since they need to turn a 
> specified name into a policy, and the set of available EC policies is only 
> available on the NN.
> You can see this awkwardness in the current EC cli set command: it first 
> fetches the list of EC policies, looks for the one specified by the user, 
> then calls set. This means we need to issue two RPCs for every set 
> (inefficient), and we need to do validation on the NN side anyway (extraneous 
> work).
> Since we're phasing out the system default EC policy, it also makes sense to 
> make the policy a required parameter.
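> A hedged sketch of the API shape this implies (parameter name assumed, not 
> necessarily the patch): the client sends only the policy name, and the NN 
> resolves and validates it in a single RPC.
> {code}
> /**
>  * Set the erasure coding policy on a directory.
>  * @param path the directory
>  * @param ecPolicyName name of a policy registered on the NameNode;
>  *                     required, no longer optional
>  * @throws IllegalArgumentException if the name matches no known policy
>  */
> public void setErasureCodingPolicy(Path path, String ecPolicyName)
>     throws IOException;
> {code}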



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-02-27 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-11338:

Attachment: HDFS-11338-HDFS-10285.00.patch

The initial patch.

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11338-HDFS-10285.00.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop the NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So it takes a longer time to finish some tests, and this leads to the timeout 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11463) Add metrics for container operations and export over JMX

2017-02-27 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885508#comment-15885508
 ] 

Mukul Kumar Singh commented on HDFS-11463:
--

TestDatanodeStateMachine & TestCBlockServer are passing locally, 
and TestDelegationTokenFetcher is failing for a reason unrelated to this 
patch.

Also, the asflicense & findbugs errors are not related to this patch.

> Add metrics for container operations and export over JMX
> 
>
> Key: HDFS-11463
> URL: https://issues.apache.org/jira/browse/HDFS-11463
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11463-HDFS-7240.001.patch
>
>
> Add metrics for various container operations.
> Measure the number of ops and the average latency for all the ops.
> A non-exhaustive list of ops:
> 1) container create
> 2) container delete
> 3) container list
> 4) put small file
> 5) get small file
> etc.
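> A hypothetical sketch with the hadoop-metrics2 annotations (class and field 
> names assumed, not the attached patch): one counter plus a mutable rate per op 
> yields both the op count and the average latency over JMX.
> {code}
> import org.apache.hadoop.metrics2.annotation.Metric;
> import org.apache.hadoop.metrics2.annotation.Metrics;
> import org.apache.hadoop.metrics2.lib.MutableCounterLong;
> import org.apache.hadoop.metrics2.lib.MutableRate;
> 
> // Sketch only: names are assumptions, not the attached patch.
> @Metrics(about = "Container operation metrics", context = "dfs")
> public class ContainerOpMetrics {
>   @Metric private MutableCounterLong numContainerCreate;
>   @Metric private MutableRate containerCreateLatency;
> 
>   // Call once per completed create op; MutableRate tracks both the
>   // sample count and the running average, so latency falls out for free.
>   public void addContainerCreate(long latencyMs) {
>     numContainerCreate.incr();
>     containerCreateLatency.add(latencyMs);
>   }
> }
> {code}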



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11382) Persist Erasure Coding Policy ID in a new optional field in INodeFile in FSImage

2017-02-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885497#comment-15885497
 ] 

Ewan Higgs commented on HDFS-11382:
---

[~andrew.wang], the main design concern I have is whether it plays nicely with 
HDFS-10867 (wherein the {{BlockTypeProto}} is extended with a {{PROVIDED}} 
type). I think the patch currently works with HDFS-10867, as it's only 
adding a field after {{BlockTypeProto}}. However, your suggestion of removing 
{{BlockTypeProto}} would bring us back to where we were with {{isStriped}}, 
except it would only be the incidental existence of a field that denotes the 
block type.

You could remove {{blockType}}, but it will likely reappear in the changeset for 
HDFS-10867.

> Persist Erasure Coding Policy ID in a new optional field in INodeFile in 
> FSImage
> 
>
> Key: HDFS-11382
> URL: https://issues.apache.org/jira/browse/HDFS-11382
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11382.01.patch, HDFS-11382.02.patch, 
> HDFS-11382.03.patch
>
>
> For Erasure Coded files, the replication field in the INodeFile message is 
> re-used to store the EC Policy ID. 
> *FSDirWriteFileOp#addFile*
> {noformat}
>   private static INodesInPath addFile(
>   FSDirectory fsd, INodesInPath existing, byte[] localName,
>   PermissionStatus permissions, short replication, long 
> preferredBlockSize,
>   String clientName, String clientMachine)
>   throws IOException {
> .. .. ..
> try {
>   ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp.
>   getErasureCodingPolicy(fsd.getFSNamesystem(), existing);
>   if (ecPolicy != null) {
> replication = ecPolicy.getId();   <===
>   }
>   final BlockType blockType = ecPolicy != null?
>   BlockType.STRIPED : BlockType.CONTIGUOUS;
>   INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
>   modTime, modTime, replication, preferredBlockSize, blockType);
>   newNode.setLocalName(localName);
>   newNode.toUnderConstruction(clientName, clientMachine);
>   newiip = fsd.addINode(existing, newNode, permissions.getPermission());
> {noformat}
> With the HDFS-11268 fix, {{FSImageFormatPBINode#Loader#loadInodeFile}} correctly 
> reads the EC policy ID from the replication field and then uses the right policy 
> to construct the blocks.
> *FSImageFormatPBINode#Loader#loadInodeFile*
> {noformat}
>   ErasureCodingPolicy ecPolicy = (blockType == BlockType.STRIPED) ?
>   ErasureCodingPolicyManager.getPolicyByPolicyID((byte) replication) :
>   null;
> {noformat}
> The original intention was to re-use the replication field so the in-memory 
> representation would be compact. But this isn't necessary for the on-disk 
> representation: replication is an optional field, and if we add another 
> optional field for the EC policy, it won't take any extra space.
> Also, we need to make sure to have the appropriate asserts in place to make 
> sure both fields aren't set for the same INodeFile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details

2017-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885343#comment-15885343
 ] 

Hadoop QA commented on HDFS-10506:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Timed out junit tests | 
org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
|
| JDK v1.7.0_121 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-10506 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854819/HDFS-10506-branch-2.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 448d31577a7b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016