[jira] [Resolved] (HDFS-12202) Provide new set of FileSystem API to bypass external attribute provider

2017-08-16 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HDFS-12202.
--
Resolution: Won't Fix

> Provide new set of FileSystem API to bypass external attribute provider
> ---
>
> Key: HDFS-12202
> URL: https://issues.apache.org/jira/browse/HDFS-12202
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> HDFS client uses
> {code}
>   /**
>    * Return a file status object that represents the path.
>    * @param f The path we want information from
>    * @return a FileStatus object
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public abstract FileStatus getFileStatus(Path f) throws IOException;
>
>   /**
>    * List the statuses of the files/directories in the given path if the
>    * path is a directory.
>    *
>    * Does not guarantee to return the List of files/directories status in a
>    * sorted order.
>    *
>    * Will not return null. Expect IOException upon access error.
>    * @param f given path
>    * @return the statuses of the files/directories in the given path
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public abstract FileStatus[] listStatus(Path f)
>       throws FileNotFoundException, IOException;
> {code}
> to get FileStatus of files.
> When an external attribute provider (INodeAttributeProvider) is enabled for a
> cluster, the provider is consulted for some relevant info (including ACL,
> group, etc.), which is then returned in the FileStatus.
> There is a problem here: when we use distcp to copy files from srcCluster to
> tgtCluster, and srcCluster has an external attribute provider enabled, the
> copied data would contain data from the attribute provider, which we may not
> want.
> This jira is to add a new set of interfaces for distcp to use, so that distcp
> can copy only the HDFS data and bypass the external attribute provider data.
> The new set of APIs would look like
> {code}
>   /**
>    * Return a file status object that represents the path.
>    * @param f The path we want information from
>    * @param bypassExtAttrProvider if true, bypass the external attr provider
>    *        when it's in use.
>    * @return a FileStatus object
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public FileStatus getFileStatus(Path f,
>       final boolean bypassExtAttrProvider) throws IOException;
>
>   /**
>    * List the statuses of the files/directories in the given path if the
>    * path is a directory.
>    *
>    * Does not guarantee to return the List of files/directories status in a
>    * sorted order.
>    *
>    * Will not return null. Expect IOException upon access error.
>    * @param f given path
>    * @param bypassExtAttrProvider if true, bypass the external attr provider
>    *        when it's in use.
>    * @return the statuses of the files/directories in the given path
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public FileStatus[] listStatus(Path f,
>       final boolean bypassExtAttrProvider)
>       throws FileNotFoundException, IOException;
> {code}
> So when bypassExtAttrProvider is true, the external attribute provider will
> be bypassed.
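> For illustration, here is a minimal caller-side sketch of how distcp might
> use the proposed overloads (the cluster URI and path are placeholders, and
> the two-argument methods are the proposed API, not existing FileSystem
> methods):
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class BypassProviderSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     FileSystem srcFs = FileSystem.get(URI.create("hdfs://srcCluster"), conf);
>     Path src = new Path("/data");
>     // true => bypass the external attribute provider (proposed API)
>     FileStatus status = srcFs.getFileStatus(src, true);
>     System.out.println(status.getOwner() + ":" + status.getGroup());
>     for (FileStatus child : srcFs.listStatus(src, true)) {
>       // child carries the native HDFS owner/group info, with no
>       // provider-supplied values mixed in.
>       System.out.println(child.getPath() + " " + child.getGroup());
>     }
>   }
> }
> {code}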
> Thanks.






[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-16 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Attachment: HDFS-12283-HDFS-7240.002.patch

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container
> blocks which are under deletion. It maintains info about under-deletion
> container blocks notified by KSM, and the state of how each transaction is
> processed. We can use RocksDB to implement the first version of the log; the
> schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incrementing long transaction ID; each transaction covers ONE
> container and multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a
> datanode. It represents the "state" of the transaction and is in the range
> \[-1, 5\]: -1 means the transaction eventually failed after some retries,
> and 5 is the max number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with
> the RocksDB {{MetadataStore}} as the first version.
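> For reference, a minimal sketch of what such an interface might look like
> (method names, signatures, and the {{DeletedBlocksTransaction}} type are
> illustrative assumptions, not the committed design):
> {code:java}
> import java.io.Closeable;
> import java.io.IOException;
> import java.util.List;
>
> // Sketch only: a persisted log of container blocks pending deletion.
> public interface DeletedBlockLog extends Closeable {
>   // Append one transaction: the blocks of a single container to delete.
>   void addTransaction(String containerName, List<String> blocks)
>       throws IOException;
>   // Fetch up to `count` pending transactions to ship to datanodes.
>   List<DeletedBlocksTransaction> getTransactions(int count)
>       throws IOException;
>   // Bump ProcessedCount after a send attempt; -1 marks permanent failure.
>   void incrementCount(List<Long> txIDs) throws IOException;
>   // Remove transactions once datanodes confirm the deletions.
>   void commitTransactions(List<Long> txIDs) throws IOException;
> }
> {code}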






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-16 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128428#comment-16128428
 ] 

Yuanbo Liu commented on HDFS-12283:
---

[~cheersyang] Thanks a lot for your review.
I agree with most of your comments and have addressed them in the v2 patch.
{quote}
line 110: this might have problem.. e.g you have listed tx 1,3,5 (2,4 has been 
committed), then 4 won't be a valid start key anymore.
{quote}
This is quite a tricky part, because committing transactions can happen at any
time. So we decided to use a filter instead of a startKey to avoid repeatedly
scanning the same records. The whole idea is based on the sequentially
increasing txid: we record the last-read txid, scan the db from the very
beginning, and pick up the transactions whose ids are larger than the
last-read txid, so the invalid start key issue is addressed.
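A minimal sketch of that filter-based scan (the {{store.entries()}} iteration
and the {{DeletedBlocksTransaction}} type below are illustrative assumptions,
not the actual patch):
{code:java}
// Sketch only: scan from the beginning and filter by the last txid we
// handed out, instead of seeking to a start key that may have been
// committed (and thus deleted) in the meantime.
private long lastReadTxID = -1;

synchronized List<DeletedBlocksTransaction> getTransactions(int count)
    throws IOException {
  List<DeletedBlocksTransaction> result = new ArrayList<>(count);
  for (Map.Entry<Long, DeletedBlocksTransaction> e : store.entries()) {
    // txids increase monotonically, so anything <= lastReadTxID has
    // already been returned by an earlier scan.
    if (e.getKey() > lastReadTxID && result.size() < count) {
      result.add(e.getValue());
      lastReadTxID = e.getKey();
    }
  }
  return result;
}
{code}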

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container
> blocks which are under deletion. It maintains info about under-deletion
> container blocks notified by KSM, and the state of how each transaction is
> processed. We can use RocksDB to implement the first version of the log; the
> schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incrementing long transaction ID; each transaction covers ONE
> container and multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a
> datanode. It represents the "state" of the transaction and is in the range
> \[-1, 5\]: -1 means the transaction eventually failed after some retries,
> and 5 is the max number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with
> the RocksDB {{MetadataStore}} as the first version.






[jira] [Commented] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128430#comment-16128430
 ] 

Akira Ajisaka commented on HDFS-12269:
--

Mostly looks good to me. Two minor comments:
1. Would you remove unused imports in DistributedFileSystem.java and 
ClientNamenodeProtocolServerSideTranslatorPB.java?
2. In {{Map<String, String> ecCodecs = new HashMap<String, String>();}},
{{new HashMap<String, String>()}} can be shortened to {{new HashMap<>()}}.

> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol
> returns a HashMap:
> {code:java}
>   HashMap<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.
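> For illustration, the suggested change is simply to widen the declared
> return type to the interface (assuming the String-to-String mapping above):
> {code:java}
>   Map<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> Callers then depend only on Map semantics, leaving the concrete type an
> implementation detail.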






[jira] [Commented] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128432#comment-16128432
 ] 

Hadoop QA commented on HDFS-12269:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882077/HDFS-12269.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a728b73b1790 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75dd866 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20716/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20716/artifact/

[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128475#comment-16128475
 ] 

Hadoop QA commented on HDFS-11082:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882083/HDFS-11082.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2e011f342c6a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 588c190 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apach

[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-16 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128498#comment-16128498
 ] 

Surendra Singh Lilhore commented on HDFS-12214:
---

bq. FSNamesystem#startActiveServices() flow is for HA and start SPS only in 
Active NN, the non-HA is starting SPS via BlockManager#activate() 

For non-HA deployments the flow also reaches
{{FSNamesystem#startActiveServices()}} and starts the SPS. So we do not need
to start SPS in {{BlockManager#activate()}}.

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128507#comment-16128507
 ] 

Hadoop QA commented on HDFS-12283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 38s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.ozone.scm.block.DeletedBlockLogImpl.getTransactions(int) does 
not release lock on all exception paths  At DeletedBlockLogImpl.java:on all 
exception paths  At DeletedBlockLogImpl.java:[line 113] |
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.TestMiniOzoneCluster |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.web.client.TestBucketsRatis |
|   | hadoop.ozone.ksm.TestMultipleContainerReadWrite |
|   | hadoop.ozone.ksm.TestKeySpaceManager

[jira] [Commented] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128520#comment-16128520
 ] 

Hadoop QA commented on HDFS-12258:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestClusterTopology |
|   | hadoop.net.TestDNS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882086/HDFS-12258.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux ee8c95369190 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 58

[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-16 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128531#comment-16128531
 ] 

Mukul Kumar Singh commented on HDFS-12292:
--

Hi [~erofeev], the patch looks really good to me. However, the failure of
hadoop.cli.TestHDFSCLI is related: the actual and expected output in the test
differ. The expected output needs to be modified in testConf.xml:
{code}
2017-08-16 13:50:58,979 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(182)) - Expected output:   
[clrQuota: Directory does not exist: /test1]
2017-08-16 13:50:58,979 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(184)) -   Actual output:   
[clrQuota: File does not exist: hdfs://localhost:53708/test1]
{code}


> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is
> federated; otherwise an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it makes fs.defaultFS unreliable for these commands
> and forces the creation of client-side mappings for management scripts.
> Implementation:
> The PathData that is passed to commands should be resolved to its actual
> FileSystem.
> Result:
> ViewFS paths will be resolved to the actual HDFS file system.
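> A minimal sketch of the intended resolution step ({{FileSystem#resolvePath}}
> is existing API; the paths below are placeholders):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Resolve a viewfs:// path down to the concrete filesystem that backs it,
> // so dfsadmin can operate on the underlying DistributedFileSystem.
> Configuration conf = new Configuration();
> Path p = new Path("viewfs://vfs-root/user/uname");  // placeholder path
> FileSystem viewFs = p.getFileSystem(conf);          // ViewFileSystem
> Path resolved = viewFs.resolvePath(p);              // e.g. hdfs://users-fs/user/uname
> FileSystem target = resolved.getFileSystem(conf);   // the actual HDFS fs
> {code}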






[jira] [Comment Edited] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-08-16 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128545#comment-16128545
 ] 

Wellington Chevreuil edited comment on HDFS-12182 at 8/16/17 9:28 AM:
--

Fixed the issue on TestMetaSave. The other tests that failed on the previous
build are passing locally, so maybe it's only a timeout issue?


was (Author: wchevreuil):
Fixed issue on TestMetaSave. Other tests failed on previous jira are passing 
locally.

> BlockManager.metaSave does not distinguish between "under replicated" and 
> "missing" blocks
> --
>
> Key: HDFS-12182
> URL: https://issues.apache.org/jira/browse/HDFS-12182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch, 
> HDFS-12182.003.patch, HDFS-12182.004.patch, HDFS-12182-branch-2.001.patch, 
> HDFS-12182-branch-2.002.patch
>
>
> Currently, the *BlockManager.metaSave* method (which is called by the
> "-metasave" dfs CLI command) reports both "under replicated" and "missing"
> blocks under the same metric *Metasave: Blocks waiting for reconstruction:*,
> as shown in the code snippet below:
> {noformat}
>synchronized (neededReconstruction) {
>   out.println("Metasave: Blocks waiting for reconstruction: "
>   + neededReconstruction.size());
>   for (Block block : neededReconstruction) {
> dumpBlockMeta(block, out);
>   }
> }
> {noformat}
> *neededReconstruction* is an instance of *LowRedundancyBlocks*, which
> currently wraps 5 priority queues. 4 of these queues store different
> under-replicated scenarios, while the 5th is dedicated to corrupt/missing
> blocks.
> Thus, the metasave report may suggest that some corrupt blocks are just
> under replicated. This can be misleading for admins and operators trying to
> track block missing/corruption issues, and/or other issues related to
> *BlockManager* metrics.
> I would like to propose a patch with trivial changes that would report
> corrupt blocks separately.
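> A rough sketch of the direction (the {{getCorruptBlockSize()}} accessor used
> below is an assumption for illustration; the actual patch may differ):
> {noformat}
>    synchronized (neededReconstruction) {
>      out.println("Metasave: Blocks waiting for reconstruction: "
>          + (neededReconstruction.size()
>              - neededReconstruction.getCorruptBlockSize()));
>      out.println("Metasave: Blocks currently missing: "
>          + neededReconstruction.getCorruptBlockSize());
>      for (Block block : neededReconstruction) {
>        dumpBlockMeta(block, out);
>      }
>    }
> {noformat}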






[jira] [Updated] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-08-16 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12182:

Attachment: HDFS-12182-branch-2.002.patch

Fixed the issue on TestMetaSave. The other tests that failed on the previous
build are passing locally.

> BlockManager.metaSave does not distinguish between "under replicated" and 
> "missing" blocks
> --
>
> Key: HDFS-12182
> URL: https://issues.apache.org/jira/browse/HDFS-12182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch, 
> HDFS-12182.003.patch, HDFS-12182.004.patch, HDFS-12182-branch-2.001.patch, 
> HDFS-12182-branch-2.002.patch
>
>
> Currently, the *BlockManager.metaSave* method (which is called by the
> "-metasave" dfs CLI command) reports both "under replicated" and "missing"
> blocks under the same metric *Metasave: Blocks waiting for reconstruction:*,
> as shown in the code snippet below:
> {noformat}
>synchronized (neededReconstruction) {
>   out.println("Metasave: Blocks waiting for reconstruction: "
>   + neededReconstruction.size());
>   for (Block block : neededReconstruction) {
> dumpBlockMeta(block, out);
>   }
> }
> {noformat}
> *neededReconstruction* is an instance of *LowRedundancyBlocks*, which
> currently wraps 5 priority queues. 4 of these queues store different
> under-replicated scenarios, while the 5th is dedicated to corrupt/missing
> blocks.
> Thus, the metasave report may suggest that some corrupt blocks are just
> under replicated. This can be misleading for admins and operators trying to
> track block missing/corruption issues, and/or other issues related to
> *BlockManager* metrics.
> I would like to propose a patch with trivial changes that would report
> corrupt blocks separately.






[jira] [Updated] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12269:

Attachment: HDFS-12269.003.patch

> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch, 
> HDFS-12269.003.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol
> returns a HashMap:
> {code:java}
>   HashMap<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.






[jira] [Commented] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128582#comment-16128582
 ] 

Huafeng Wang commented on HDFS-12269:
-

Hi [~ajisakaa], thanks for your review. I just uploaded a new patch.

> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch, 
> HDFS-12269.003.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol
> returns a HashMap:
> {code:java}
>   HashMap<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.






[jira] [Commented] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128682#comment-16128682
 ] 

Hadoop QA commented on HDFS-12182:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
| JDK v1.7.0_131 Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HDFS-12182 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882101/HDFS-12182-branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ecd53f4630b4 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Person

[jira] [Commented] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128724#comment-16128724
 ] 

Hadoop QA commented on HDFS-12269:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.security.TestKDiag |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882108/HDFS-12269.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 163e8940e895 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 588c190 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2

[jira] [Updated] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-16 Thread Mikhail Erofeev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Erofeev updated HDFS-12292:
---
Attachment: HDFS-12292-003.patch

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is
> federated; otherwise an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it makes fs.defaultFS unreliable for these commands
> and forces the creation of client-side mappings for management scripts.
> Implementation:
> The PathData that is passed to commands should be resolved to its actual
> FileSystem.
> Result:
> ViewFS paths will be resolved to the actual HDFS file system.






[jira] [Updated] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-16 Thread Mikhail Erofeev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Erofeev updated HDFS-12292:
---
Attachment: (was: HDFS-12292-003.patch)

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is
> federated; otherwise an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it makes fs.defaultFS unreliable for these commands
> and forces the creation of client-side mappings for management scripts.
> Implementation:
> The PathData that is passed to commands should be resolved to its actual
> FileSystem.
> Result:
> ViewFS paths will be resolved to the actual HDFS file system.






[jira] [Updated] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-16 Thread Mikhail Erofeev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Erofeev updated HDFS-12292:
---
Attachment: HDFS-12292-003.patch

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is
> federated; otherwise an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it makes fs.defaultFS unreliable for these commands
> and forces the creation of client-side mappings for management scripts.
> Implementation:
> The PathData that is passed to commands should be resolved to its actual
> FileSystem.
> Result:
> ViewFS paths will be resolved to the actual HDFS file system.






[jira] [Created] (HDFS-12309) Incorrect null check in the AdminHelper.java

2017-08-16 Thread Oleg Danilov (JIRA)
Oleg Danilov created HDFS-12309:
---

 Summary: Incorrect null check in the AdminHelper.java
 Key: HDFS-12309
 URL: https://issues.apache.org/jira/browse/HDFS-12309
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Oleg Danilov
Priority: Trivial


'!= null' is not required there:

line 147-150:

{code:java}
public HelpCommand(Command[] commands) {
  Preconditions.checkNotNull(commands != null);
  this.commands = commands;
}
{code}
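
Since {{checkNotNull(commands != null)}} passes a boxed {{Boolean}} that is never null, the check can never fire. A minimal sketch of the intended call (Guava's {{checkNotNull}} takes the reference itself and returns it):

{code:java}
public HelpCommand(Command[] commands) {
  // Check the array reference itself, not a boolean expression.
  this.commands = Preconditions.checkNotNull(commands);
}
{code}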




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12119) Inconsistent "Number of Under-Replicated Blocks" shown on HDFS web UI and fsck report

2017-08-16 Thread liuyiyang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuyiyang updated HDFS-12119:
-
Attachment: HDFS-12119.001.patch

> Inconsistent "Number of Under-Replicated Blocks"  shown on HDFS web UI and 
> fsck report
> --
>
> Key: HDFS-12119
> URL: https://issues.apache.org/jira/browse/HDFS-12119
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: liuyiyang
> Attachments: HDFS-12119.001.patch
>
>
> Sometimes the information "Number of Under-Replicated Blocks" shown on 
> NameNode web UI is inconsistent with the "Under-replicated blocks" 
> information shown in fsck report.
> It's easy to reproduce such a case as follows:
> 1. In a cluster with DN1 (rack0), DN2 (rack1) and DN3 (rack2) that stores a lot 
> of blocks, set the replication factor to 2;
> 2. Re-allocate the racks as DN1 (rack0), DN2 (rack1) and DN3 (rack1);
> 3. Restart the HDFS daemons.
> Then you can see an inconsistent "Number of Under-Replicated Blocks" on the web 
> UI versus "Under-replicated blocks" in the fsck result.
> I dug into the source code and found that "Number of Under-Replicated 
> Blocks" on the web UI counts both blocks that have fewer than the target number 
> of replicas and blocks that have the right number of replicas but that the 
> block manager considers badly distributed. In the fsck result, "Under-replicated 
> blocks" covers only the blocks with fewer than the target number of replicas, 
> while "Mis-replicated blocks" covers the badly distributed ones. So the 
> Under-Replicated Blocks info on the web UI and in the fsck result may disagree.
> Since Under-Replicated Blocks implies a higher risk of losing blocks, when 
> there are no blocks with fewer than the target number of replicas but many 
> blocks that have the right number of replicas yet are badly distributed, the 
> "Number of Under-Replicated Blocks" on the web UI will equal the number of 
> "Mis-replicated blocks", which is misleading for users.
> It would be clearer to make "Number of Under-Replicated Blocks" on the web UI 
> consistent with "Under-replicated blocks" in the fsck result.
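
To make the two counting conventions concrete, a small illustration with made-up numbers (hypothetical values, not actual BlockManager fields):

{code:java}
long underReplicated = 120;  // blocks with fewer replicas than the target
long misReplicated = 80;     // blocks with enough replicas, but badly distributed

long webUiCount = underReplicated + misReplicated;  // web UI reports 200
long fsckUnderReplicated = underReplicated;         // fsck reports 120 here ...
long fsckMisReplicated = misReplicated;             // ... and 80 separately
{code}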



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12119) Inconsistent "Number of Under-Replicated Blocks" shown on HDFS web UI and fsck report

2017-08-16 Thread liuyiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128841#comment-16128841
 ] 

liuyiyang commented on HDFS-12119:
--

I attached a patch for this problem; could someone please review it? Thanks so 
much.

> Inconsistent "Number of Under-Replicated Blocks"  shown on HDFS web UI and 
> fsck report
> --
>
> Key: HDFS-12119
> URL: https://issues.apache.org/jira/browse/HDFS-12119
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: liuyiyang
> Attachments: HDFS-12119.001.patch
>
>
> Sometimes the information "Number of Under-Replicated Blocks" shown on 
> NameNode web UI is inconsistent with the "Under-replicated blocks" 
> information shown in fsck report.
> It's easy to reproduce such a case as follows:
> 1. In a cluster with DN1 (rack0), DN2 (rack1) and DN3 (rack2) that stores a lot 
> of blocks, set the replication factor to 2;
> 2. Re-allocate the racks as DN1 (rack0), DN2 (rack1) and DN3 (rack1);
> 3. Restart the HDFS daemons.
> Then you can see an inconsistent "Number of Under-Replicated Blocks" on the web 
> UI versus "Under-replicated blocks" in the fsck result.
> I dug into the source code and found that "Number of Under-Replicated 
> Blocks" on the web UI counts both blocks that have fewer than the target number 
> of replicas and blocks that have the right number of replicas but that the 
> block manager considers badly distributed. In the fsck result, "Under-replicated 
> blocks" covers only the blocks with fewer than the target number of replicas, 
> while "Mis-replicated blocks" covers the badly distributed ones. So the 
> Under-Replicated Blocks info on the web UI and in the fsck result may disagree.
> Since Under-Replicated Blocks implies a higher risk of losing blocks, when 
> there are no blocks with fewer than the target number of replicas but many 
> blocks that have the right number of replicas yet are badly distributed, the 
> "Number of Under-Replicated Blocks" on the web UI will equal the number of 
> "Mis-replicated blocks", which is misleading for users.
> It would be clearer to make "Number of Under-Replicated Blocks" on the web UI 
> consistent with "Under-replicated blocks" in the fsck result.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-16 Thread Mikhail Erofeev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128860#comment-16128860
 ] 

Mikhail Erofeev commented on HDFS-12292:


Thanks [~msingh] for looking into this jira and for pointing out this issue; I 
had only noticed the timeout in the TestHDFSCLI message.
I fixed the tests as you suggested, but now I am worried that the documentation in 
HdfsQuotaAdminGuide.md is not exactly precise, because the current code throws 
"File does not exist" instead of "Directory does not exist" as stated in the 
text. Moreover, depending on whether hdfs://test or /test is given, different 
exception texts will be produced, depending on which step fails.
I think the current solution is OK, as the exception is precise enough at 
this level of abstraction. The method is used not only for quota commands, so 
paths other than directories can be passed in. And the exception should be raised 
at this common level, as DfsAdmin commands operate only on existing files and 
directories.
Please correct me if I am wrong; I am new to contributing to Hadoop.

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is federated; 
> otherwise, an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it makes it impossible to rely on fs.defaultFS and forces 
> users to create client-side mappings for management scripts.
> Implementation:
> PathData that is passed to commands should be resolved to its actual 
> FileSystem
> Result:
> ViewFS will be resolved to the actual HDFS file system



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-16 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-12214:

Attachment: HDFS-12214-HDFS-10285-08.patch

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch, HDFS-12214-HDFS-10285-08.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128935#comment-16128935
 ] 

Rakesh R commented on HDFS-12214:
-

Yes, I got it. Attached a new patch addressing the comments.

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch, HDFS-12214-HDFS-10285-08.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12248) SNN will not upload fsimage on IOE and Interrupted exceptions

2017-08-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12248:

Attachment: HDFS-12248-002.patch

Thanks [~shahrs87] and [~hkoneru] for taking a look at this issue.

bq.We should have an AND instead of OR here to capture the case of no exception.
Yup, I missed that.
bq.isPrimaryCheckPointer should be outside the if condition. If the ANN update 
was not successful, then isPrimaryCheckPointer should be set to false.
In the non-exception case, {{success}} is {{false}} if the ANN update fails, so 
{{isPrimaryCheckPointer}} will be assigned {{false}} anyway.

Uploaded the patch; kindly review.

> SNN will not upload fsimage on IOE and Interrupted exceptions
> -
>
> Key: HDFS-12248
> URL: https://issues.apache.org/jira/browse/HDFS-12248
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12248-002.patch, HDFS-12248.patch
>
>
> Related to HDFS-9787. While the fsimage is uploading to the ANN, if any 
> interrupt or IOE occurs, {{isPrimaryCheckPointer}} is set to 
> {{false}}. If a rolling upgrade is triggered at the same time, the SNN does the 
> checkpoint without sending the fsimage, since {{sendRequest}} will be {{false}}.
> So the {{rollback}} image will not be sent to the ANN.
> {code}
>   } catch (ExecutionException e) {
> ioe = new IOException("Exception during image upload: " + 
> e.getMessage(),
> e.getCause());
> break;
>   } catch (InterruptedException e) {
> ie = e;
> break;
>   }
> }
> lastUploadTime = monotonicNow();
> // we are primary if we successfully updated the ANN
> this.isPrimaryCheckPointer = success;
> {code}
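
A minimal sketch of the check under discussion, using the variables from the snippet above (an illustration of the AND condition from the review comments, not the final patch):

{code:java}
// Success only if neither an InterruptedException nor an IOException was
// recorded during the upload loop (AND, not OR, so the no-exception case counts).
boolean success = (ie == null && ioe == null);
lastUploadTime = monotonicNow();
// We are primary only if we successfully updated the ANN.
this.isPrimaryCheckPointer = success;
{code}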



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10326) Disable setting tcp socket send/receive buffers for write pipelines

2017-08-16 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128972#comment-16128972
 ] 

Erik Krogen commented on HDFS-10326:


[~wheat9] I want to confirm: I see the fix version is only 2.8.2 right now; should it 
also have 3.0 and 2.9?

> Disable setting tcp socket send/receive buffers for write pipelines
> ---
>
> Key: HDFS-10326
> URL: https://issues.apache.org/jira/browse/HDFS-10326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.2
>
> Attachments: HDFS-10326.000.patch, HDFS-10326.001.patch, 
> HDFS-10326.001.patch
>
>
> The DataStreamer and the Datanode use a hardcoded 
> DEFAULT_DATA_SOCKET_SIZE=128K for the send and receive buffers of a write 
> pipeline.  Explicitly setting TCP buffer sizes disables TCP stack 
> auto-tuning.
> The hardcoded value will saturate a 1 Gb link at 1 ms RTT, reach only 105 Mbps 
> at 10 ms, and a paltry 11 Mbps over a 100 ms long haul.  10 Gb networks are 
> underutilized.
> There should either be a configuration to completely disable setting the 
> buffers, or the setReceiveBuffer and setSendBuffer calls should be removed 
> entirely.
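
Those figures follow from the bandwidth-delay product: with a fixed 128 KB window, achievable throughput is roughly window size divided by RTT. A quick back-of-the-envelope check:

{code:java}
long windowBits = 128L * 1024 * 8;  // the hardcoded 128 KB window, in bits
for (double rttSec : new double[] {0.001, 0.010, 0.100}) {
  System.out.printf("RTT %5.0f ms -> %7.1f Mbps%n", rttSec * 1000, windowBits / rttSec / 1e6);
}
// ~1048.6 Mbps at 1 ms (saturates 1 Gb), ~104.9 Mbps at 10 ms, ~10.5 Mbps at 100 ms.
{code}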



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11817) A faulty node can cause a lease leak and NPE on accessing data

2017-08-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128995#comment-16128995
 ] 

Brahma Reddy Battula commented on HDFS-11817:
-

I feel this can be merged to {{branch-2.7}} as well.

> A faulty node can cause a lease leak and NPE on accessing data
> --
>
> Key: HDFS-11817
> URL: https://issues.apache.org/jira/browse/HDFS-11817
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11817.branch-2.patch, hdfs-11817_supplement.txt, 
> HDFS-11817.v2.branch-2.8.patch, HDFS-11817.v2.branch-2.patch, 
> HDFS-11817.v2.trunk.patch
>
>
> When the namenode performs a lease recovery for a failed write, 
> {{commitBlockSynchronization()}} will fail if none of the new targets has 
> sent a received-IBR.  At this point, the data is inaccessible, as the 
> namenode will throw a {{NullPointerException}} upon {{getBlockLocations()}}.
> The lease recovery will be retried in about an hour by the namenode. If the 
> nodes are faulty (usually when there is only one new target), they may not 
> send a block report until this point. If this happens, lease recovery throws an 
> {{AlreadyBeingCreatedException}}, which causes the LeaseManager to simply remove 
> the lease without finalizing the inode.
> This results in an inconsistent lease state. The inode stays 
> under-construction, but no more lease recovery is attempted. A manual lease 
> recovery is also not allowed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12299) Race Between update pipeline and DN Re-Registration

2017-08-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12299:

Attachment: HDFS-12299.patch

Uploading a test to reproduce this issue.

Duplicate of HDFS-11817 (HDFS-9040 for trunk), but I feel this test case can still 
be pushed?

> Race Between update pipeline and DN Re-Registration
> ---
>
> Key: HDFS-12299
> URL: https://issues.apache.org/jira/browse/HDFS-12299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12299.patch
>
>
>  *Scenario*
>  - Started a pipeline with DN1->DN2->DN3
>  - DN1 re-registers and updatePipeline is called
>  - updatePipeline succeeds with DN1->DN3->DN4
>  - updatePipeline is called again, which fails with an NPE.
> In step 3, updatePipeline sets the storages to null, since the DN1 
> re-registration removed and re-added the storages.
> {{FSNamesystem#updatePipelineInternal}}
> {code}
>lastBlock.getUnderConstructionFeature().setExpectedLocations(lastBlock,
> storages, lastBlock.getBlockType())
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12307) Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails

2017-08-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129051#comment-16129051
 ] 

Xiaoyu Yao commented on HDFS-12307:
---

Is this the dup of HDFS-12216?

> Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails
> ---
>
> Key: HDFS-12307
> URL: https://issues.apache.org/jira/browse/HDFS-12307
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>
> It seems this UT constantly fails with the following error:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
>   at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:146)
>   at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
>   at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1579)
>   at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1200)
>   at 
> org.apache.hadoop.ozone.web.exceptions.OzoneException.parse(OzoneException.java:248)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.executeGetKey(OzoneBucket.java:395)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.getKey(OzoneBucket.java:321)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:288)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:265)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129065#comment-16129065
 ] 

Hadoop QA commented on HDFS-12292:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.net.TestDNS |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882137/HDFS-12292-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux be5d72e08fc1 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 588c190 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Bu

[jira] [Commented] (HDFS-12248) SNN will not upload fsimage on IOE and Interrupted exceptions

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129116#comment-16129116
 ] 

Hadoop QA commented on HDFS-12248:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 55 unchanged - 1 fixed = 61 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12248 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882150/HDFS-12248-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 39be49506bcd 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 588c190 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20724/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20724/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20724/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20724/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SNN will not upload fsimage on IOE and Interrupted exceptions
> -
>
>  

[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129136#comment-16129136
 ] 

Hadoop QA commented on HDFS-12214:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
7s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
56s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10285 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 865 unchanged - 1 fixed = 865 total (was 866) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestBlockStoragePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12214 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882149/HDFS-12214-HDFS-10285-08.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  findbugs  checkstyle  xml  |
| uname | Linux 38373c23b500 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / d8122f2 |
| Default Java | 1.8.0_144 |
| shellcheck | v0.4.6 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20723/artifact/patchpro

[jira] [Commented] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-16 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129183#comment-16129183
 ] 

Chen Liang commented on HDFS-12302:
---

[~liaoyuxiangqin] hmm... this is not as straightforward as it may seem. It 
basically comes down to the points in time at which 
{{FsVolumeImpl#addBlockPool}} gets called, which is where entries get added to 
{{bpSlices}}. Things still need to be verified and I need to look more closely, 
but here are two comments for now:
1. It seems that in the patch a null {{ReplicaMap}} is passed to {{activateVolume}}. 
The thing is, {{activateVolume}} has this line 
{code}
volumeMap.addAll(replicaMap);
{code}
which is
{code}
  void addAll(ReplicaMap other) {
map.putAll(other.map);
  }
{code}
So it seems to me that passing a null ReplicaMap will trigger a 
NullPointerException when accessing {{other.map}}.
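
For illustration only, a defensive variant of {{addAll}} that would avoid that NPE (whether to guard here or simply never pass null is the real design question):

{code:java}
void addAll(ReplicaMap other) {
  // Tolerate a null map so activateVolume(null, ...) does not throw an NPE.
  if (other != null) {
    map.putAll(other.map);
  }
}
{code}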

2. The only place the following method
{code}
void getVolumeMap(ReplicaMap volumeMap,
final RamDiskReplicaTracker ramDiskReplicaMap)
{code}
gets called is the place that's being removed in the patch. So after the patch, 
this method becomes redundant; if we do want to make the change as in the 
patch, then we should consider removing this method too. But again, things 
still need to be verified...

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
>   When I read the code that instantiates the FsDatasetImpl object during the 
> DataNode start process, I found that the getVolumeMap function actually can't 
> get the ReplicaMap info for each fsVolume, because the fsVolume's bpSlices has 
> not been initialized at that point. The detailed code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> Then I added some info logs and started the DataNode; the log output accords 
> with the code description above. The detailed log info is as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap call for each fsVolume is not necessary 
> when instantiating the FsDatasetImpl object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12294) Let distcp to bypass external attribute provider when calling getFileStatus etc at source cluster

2017-08-16 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129256#comment-16129256
 ] 

Yongjun Zhang commented on HDFS-12294:
--

Hi [~chris.douglas],

I thought I had updated here, but it seems I did not finish.

The requirement is: if a file has attribute X in the fsimage, but the external 
provider overrides it to Y, we want distcp to copy X instead of Y.

The external provider can both add and remove attributes. My worry is that 
filtering by user name would be hacky and might not even work, since the same user 
can request external attributes when not running distcp. Even when running 
distcp, some calls may still need the external attributes, such as permission 
checks.

In addition, any other user is supposed to be able to run distcp too.

Do you agree?

Thanks.





> Let distcp to bypass external attribute provider when calling getFileStatus 
> etc at source cluster
> -
>
> Key: HDFS-12294
> URL: https://issues.apache.org/jira/browse/HDFS-12294
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> This is an alternative solution for HDFS-12202, which proposed introducing a 
> new set of API, with an additional boolean parameter bypassExtAttrProvider, 
> so to let NN bypass external attribute provider when getFileStatus. The goal 
> is to avoid distcp from copying attributes from one cluster's external 
> attribute provider and save to another cluster's fsimage.
> The solution here is, instead of having an additional parameter, encode this 
> parameter to the path itself, when calling getFileStatus (and some other 
> calls), NN will parse the path, and figure out that whether external 
> attribute provider need to be bypassed. The suggested encoding is to have a 
> prefix to the path before calling getFileStatus, e.g. /ab/c becomes 
> /.reserved/bypassExtAttr/a/b/c. NN will parse the path at the very beginning.
> Thanks much to [~andrew.wang] for this suggestion. The scope of change is 
> smaller and we don't have to change the FileSystem APIs.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-16 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129342#comment-16129342
 ] 

Yongjun Zhang commented on HDFS-12295:
--

Hi [~daryn],

Thanks a lot for your comments!

Good point about move and other operations. We can add that. For delete, the prefix 
is effectively a no-op. There are quite a few other operations, so this change 
would become ugly too.

What's your view of HDFS-12202? It doesn't need to change all the other 
operations, but suggests adding a new set of APIs for getFileStatus and 
listStatus. That is a hard sell too, because we would have to introduce the new 
APIs and provide dummy implementations for all FileSystems, even those that 
don't care. Would you please share your thoughts about this anyway?

Right now the only application is that distcp will decide whether to add the prefix 
before calling the getFileStatus and listStatus methods.
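
For illustration, a sketch of that client-side prefixing (a hypothetical helper, assuming the prefix is simply prepended to the scheme-free absolute path; not the actual patch):

{code:java}
// Hypothetical distcp-side helper: prepend the reserved prefix so the NN
// bypasses the external attribute provider for this lookup.
static Path bypassExtAttrProvider(Path p) {
  return new Path("/.reserved/bypassExtAttr" + Path.getPathWithoutSchemeAndAuthority(p));
}

// e.g. /a/b/c -> /.reserved/bypassExtAttr/a/b/c
FileStatus st = fs.getFileStatus(bypassExtAttrProvider(new Path("/a/b/c")));
{code}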

About the copy-and-paste, I did it as a quick draft to illustrate the idea for 
discussion; I will definitely put it in a more centralized location.

I'm not sure whether using the super user to avoid consulting the external 
provider for permissions will work. Would you please see my discussion with 
[~chris.douglas] in HDFS-12294?

Many Thanks.




> NameNode to support file path prefix /.reserved/bypassExtAttr
> -
>
> Key: HDFS-12295
> URL: https://issues.apache.org/jira/browse/HDFS-12295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch
>
>
> Let the NameNode support the prefix /.reserved/bypassExtAttr, so a client can add 
> this prefix to a path before calling getFileStatus, e.g. /a/b/c becomes 
> /.reserved/bypassExtAttr/a/b/c. The NN will parse the path at the very beginning, 
> and bypass the external attribute provider if the prefix is there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-16 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129342#comment-16129342
 ] 

Yongjun Zhang edited comment on HDFS-12295 at 8/16/17 8:12 PM:
---

Hi [~daryn],

Thanks a lot for your comments!

Good point about move and other operations. We can add that. For delete, the prefix 
is effectively a no-op. There are quite a few other operations, so this change 
would become cumbersome too.

What's your view of HDFS-12202? It doesn't need to change all the other 
operations, but suggests adding a new set of APIs for getFileStatus and 
listStatus. That is a hard sell too, because we would have to introduce the new 
APIs and provide dummy implementations for all FileSystems, even those that 
don't care. Would you please share your thoughts about this anyway?

Right now the only application is that distcp will decide whether to add the prefix 
before calling the getFileStatus and listStatus methods.

About the copy-and-paste, I did it as a quick draft to illustrate the idea for 
discussion; I will definitely put it in a more centralized location.

I'm not sure whether using the super user to avoid consulting the external 
provider for permissions will work. Would you please see my discussion with 
[~chris.douglas] in HDFS-12294?

Many Thanks.





was (Author: yzhangal):
Hi [~daryn],

Thanks a lot for your comments!

Good point about move and other operations. We can add that. For delete, the prefix 
is effectively a no-op. There are quite a few other operations, so this change 
would become ugly too.

What's your view of HDFS-12202? It doesn't need to change all the other 
operations, but suggests adding a new set of APIs for getFileStatus and 
listStatus. That is a hard sell too, because we would have to introduce the new 
APIs and provide dummy implementations for all FileSystems, even those that 
don't care. Would you please share your thoughts about this anyway?

Right now the only application is that distcp will decide whether to add the prefix 
before calling the getFileStatus and listStatus methods.

About the copy-and-paste, I did it as a quick draft to illustrate the idea for 
discussion; I will definitely put it in a more centralized location.

I'm not sure whether using the super user to avoid consulting the external 
provider for permissions will work. Would you please see my discussion with 
[~chris.douglas] in HDFS-12294?

Many Thanks.




> NameNode to support file path prefix /.reserved/bypassExtAttr
> -
>
> Key: HDFS-12295
> URL: https://issues.apache.org/jira/browse/HDFS-12295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch
>
>
> Let the NameNode support the prefix /.reserved/bypassExtAttr, so a client can add 
> this prefix to a path before calling getFileStatus, e.g. /a/b/c becomes 
> /.reserved/bypassExtAttr/a/b/c. The NN will parse the path at the very beginning, 
> and bypass the external attribute provider if the prefix is there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-16 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12238:
--
Attachment: HDFS-12238-HDFS-7240.03.patch

> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch, 
> HDFS-12238-HDFS-7240.02.patch, HDFS-12238-HDFS-7240.03.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
>    if(StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that ozone clients always send a valid trace ID. However, when we 
> add that check, a set of current tests that do not set a valid trace ID will 
> fail, so we need to fix those tests too:
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.
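
A typical test-side fix would be to set a non-empty, unique trace ID on each request before sending. A sketch (the {{setTraceID}} builder call is an assumption about the protobuf message, whose value the check above reads via {{getTraceID}}):

{code:java}
// Give the request a valid trace ID before it is sent to the server.
String traceID = java.util.UUID.randomUUID().toString();
request = request.toBuilder().setTraceID(traceID).build();
{code}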



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12310) [SPS]: Provide an option to track the status of in progress requests

2017-08-16 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-12310:
--

 Summary: [SPS]: Provide an option to track the status of in 
progress requests
 Key: HDFS-12310
 URL: https://issues.apache.org/jira/browse/HDFS-12310
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Uma Maheswara Rao G


As per [~andrew.wang]'s review comments in HDFS-10285, this is the JIRA 
for tracking the options for how we track the progress of SPS requests.

 





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12309) Incorrect null check in the AdminHelper.java

2017-08-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129414#comment-16129414
 ] 

ASF GitHub Bot commented on HDFS-12309:
---

GitHub user dosoft opened a pull request:

https://github.com/apache/hadoop/pull/264

HDFS-12309. Fixed incorrect checkNotNull()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dosoft/hadoop HDFS-12309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/264.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #264


commit cd77a28bb7826d0cc77da365f21cd05f9fade7e0
Author: Oleg Danilov 
Date:   2017-08-16T21:04:47Z

HDFS-12309. Fixed incorrect checkNotNull()




> Incorrect null check in the AdminHelper.java
> 
>
> Key: HDFS-12309
> URL: https://issues.apache.org/jira/browse/HDFS-12309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
>
> '!= null' is not required there:
> line 147-150:
> {code:java}
> public HelpCommand(Command[] commands) {
>   Preconditions.checkNotNull(commands != null);
>   this.commands = commands;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12309) Incorrect null check in the AdminHelper.java

2017-08-16 Thread Oleg Danilov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Danilov updated HDFS-12309:

Status: Patch Available  (was: Open)

> Incorrect null check in the AdminHelper.java
> 
>
> Key: HDFS-12309
> URL: https://issues.apache.org/jira/browse/HDFS-12309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HDFS-12309.patch
>
>
> '!= null' is not required there:
> line 147-150:
> {code:java}
> public HelpCommand(Command[] commands) {
>   Preconditions.checkNotNull(commands != null);
>   this.commands = commands;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12309) Incorrect null check in the AdminHelper.java

2017-08-16 Thread Oleg Danilov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Danilov updated HDFS-12309:

Attachment: HDFS-12309.patch

> Incorrect null check in the AdminHelper.java
> 
>
> Key: HDFS-12309
> URL: https://issues.apache.org/jira/browse/HDFS-12309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HDFS-12309.patch
>
>
> '!= null' is not required there:
> line 147-150:
> {code:java}
> public HelpCommand(Command[] commands) {
>   Preconditions.checkNotNull(commands != null);
>   this.commands = commands;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129443#comment-16129443
 ] 

Anu Engineer commented on HDFS-12238:
-

+1, LGTM. [~vagarychen], [~msingh], I am going to commit this shortly.


> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch, 
> HDFS-12238-HDFS-7240.02.patch, HDFS-12238-HDFS-7240.03.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
>    if(StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that ozone clients always send a valid trace ID. However, when we 
> add that check, a set of current tests that do not set a valid trace ID will 
> fail, so we need to fix those tests too:
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12238:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~ajayydv] Thank you for the contribution. I have committed this to the feature 
branch.


> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>  Labels: newbie
> Fix For: HDFS-7240
>
> Attachments: HDFS-12238-HDFS-7240.01.patch, 
> HDFS-12238-HDFS-7240.02.patch, HDFS-12238-HDFS-7240.03.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
> if (StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that Ozone clients always send a valid trace ID. However, once that 
> check is added, a set of current tests that do not set a valid trace ID will 
> fail, so we need to fix those tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12305) Ozone: SCM: Add StateMachine for pipeline/container

2017-08-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129462#comment-16129462
 ] 

Anu Engineer commented on HDFS-12305:
-

+1, Looks excellent. Thanks for this framework.



> Ozone: SCM: Add StateMachine for pipeline/container
> ---
>
> Key: HDFS-12305
> URL: https://issues.apache.org/jira/browse/HDFS-12305
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12305-HDFS-7240.001.patch
>
>
> Add a template class that can be shared by pipeline and container state 
> machine.
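
A generic template along these lines could look like the following sketch 
(all names here are illustrative assumptions, not the contents of the 
attached patch):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch: a reusable state machine keyed by (state, event) pairs. */
final class StateMachine<S extends Enum<S>, E extends Enum<E>> {
  private final Map<S, Map<E, S>> transitions = new HashMap<>();
  private S current;

  StateMachine(S initialState) {
    this.current = initialState;
  }

  void addTransition(S from, E event, S to) {
    transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
  }

  /** Applies an event, failing fast on an undefined transition. */
  S fire(E event) {
    Map<E, S> row = transitions.get(current);
    if (row == null || !row.containsKey(event)) {
      throw new IllegalStateException(
          "No transition from " + current + " on " + event);
    }
    current = row.get(event);
    return current;
  }

  S getCurrentState() {
    return current;
  }
}
{code}

Both the pipeline and the container state machines could then be instances of 
this one class, differing only in their state and event enums.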



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12305) Ozone: SCM: Add StateMachine for pipeline/container

2017-08-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12305:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
Tags: ozone
  Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the contribution. I have committed this to the feature 
branch.

> Ozone: SCM: Add StateMachine for pipeline/container
> ---
>
> Key: HDFS-12305
> URL: https://issues.apache.org/jira/browse/HDFS-12305
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-12305-HDFS-7240.001.patch
>
>
> Add a template class that can be shared by pipeline and container state 
> machine.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-08-16 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129514#comment-16129514
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


[~anoopsamjohn], awesome, thank you so much for providing the use cases from 
the HBase perspective.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. These 
> policies can be set on a directory/file to specify the user's preference for 
> where the physical blocks should be stored. When the user sets the storage 
> policy before writing data, the blocks can take advantage of the storage policy 
> preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, the 
> blocks will have been written with the default storage policy (namely DISK). 
> The user then has to run the 'Mover tool' explicitly, specifying all such file 
> names as a list. In some distributed system scenarios (e.g. HBase) it would be 
> difficult to collect all the files and run the tool, as different nodes can 
> write files separately and the files can have different paths.
> Another scenario: when the user renames a file from a directory with an 
> effective storage policy (inherited from the parent directory) into a directory 
> with a different effective storage policy, the inherited storage policy is not 
> copied from the source; the policy of the destination file/dir's parent takes 
> effect instead. This rename operation is just a metadata change in the 
> Namenode; the physical blocks still remain under the source storage policy.
> So tracking all such business-logic-based file names across distributed nodes 
> (e.g. region servers) and running the Mover tool could be difficult for admins. 
> The proposal here is to provide an API in the Namenode itself to trigger 
> storage policy satisfaction. A daemon thread inside the Namenode would track 
> such calls and send movement commands to the DNs. 
> Will post the detailed design thoughts document soon. 
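
For illustration, a hedged sketch of how a client might use such a trigger 
({{satisfyStoragePolicy}} is a hypothetical placeholder for the proposed API; 
only {{setStoragePolicy}} exists today):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

Configuration conf = new Configuration();
Path dir = new Path("/hbase/data/table1");
DistributedFileSystem dfs = (DistributedFileSystem) dir.getFileSystem(conf);

dfs.setStoragePolicy(dir, "COLD");  // existing API: a metadata-only change
dfs.satisfyStoragePolicy(dir);      // proposed: an NN daemon schedules the
                                    // block moves to match the policy
{code}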



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-08-16 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129525#comment-16129525
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Hi [~andrew.wang], thank you for helping us a lot with the reviews. Really 
great points.
{quote}
This would be a user periodically asking for status. From what I know of async 
API design, callbacks are preferred over polling since it solves the question 
about how long the server needs to hold the status.
I'd be open to any proposal here, I just think the current "isSpsRunning" API 
is insufficient. Did you end up filing a ticket to track this?
{quote}
From an async API design perspective, I agree that such systems would have 
callback-registration APIs. I don't think we have that callback mechanism 
designed in HDFS yet. In this particular case, the user is not waiting on any 
actual processing; this is just a trigger telling the system to start some 
built-in functionality. In fact, the isSpsRunning API was added just so users 
can make sure the built-in SPS is not running when they want to run the Mover 
tool explicitly. I filed HDFS-12310 to discuss this further. I am not sure it 
is a good idea to encourage users to poll the system periodically for this 
status. IMO, if movements are really failing (probably because some storages 
are unavailable or have failed, etc.), administrator action is definitely 
required, rather than the user component learning the status and taking 
action itself. So I strongly believe that reporting failures as metrics will 
bring them to the admins' attention. Since we don't want to enable this as 
automatic movement in the first stage, there should be some trigger to start 
the movement. Some work related to async HDFS APIs is happening in HDFS-9924; 
perhaps we could take some design ideas from there for a status API once they 
are in. 
Another argument is that we already have async-fashioned APIs, for example 
delete and setReplication. They may be synchronous calls from the NN's 
perspective, but from the user's perspective a lot of work still happens 
asynchronously. If we delete a file, the NN cleans up its metadata and queues 
the blocks for deletion; the block deletions themselves happen asynchronously. 
Users trust HDFS that the data will be cleaned up, and we have no 
status-reporting API for that. If we change the replication factor, we change 
it in the NN and replication is eventually triggered; I don't think users poll 
on whether replication is done. Since replication is HDFS functionality, they 
just rely on it. If replications are failing, admin action is definitely 
required to fix them; usually admins depend on fsck or metrics. Let's discuss 
more on HDFS-12310?
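
For instance, the existing fire-and-forget pattern looks like this (a minimal 
sketch; {{fs}} is assumed to be an initialized {{FileSystem}} instance):

{code:java}
// delete() returns once the NN metadata change is done; the actual block
// deletions on the DataNodes happen asynchronously afterwards, and there is
// no status-reporting API for the user to poll.
boolean removed = fs.delete(new Path("/tmp/old-data"), true);
{code}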
To be clear, I am not saying we should not have status reporting; I feel that 
is a good-to-have requirement. Do you have some use cases for how an 
application system (e.g. HBase; [~anoopsamjohn] provided some use cases above 
for SPS) would react to the status results? 

{quote}
If I were to paraphrase, the NN is the ultimate arbiter, and the operations 
being performed by C-DNs are idempotent, so duplicate work gets dropped safely. 
I think this still makes it harder to reason about from a debugging POV, 
particularly if we want to extend this to something like EC conversion that 
might not be idempotent.
{quote}
We already do reconstruction work in EC in a way similar to the C-DN approach: 
all blocks of a block group are reconstructed at one DN, and that node is also 
chosen loosely. Here we just name it a C-DN and send more blocks as a logical 
batch (in this case, all blocks associated with a file); in the EC case we 
send the blocks of a block group. As for idempotency, even today EC 
reconstruction is done in an idempotent way. I feel we can definitely handle 
those cases: conversion of the whole file should complete before we convert 
contiguous blocks to striped mode at the NN. Whichever copy finishes first is 
what gets updated at the NN, and once the NN has converted the blocks, it 
should not accept newly converted block groups. But that should be a separate 
discussion anyway. I just wanted to point out another use case, HDFS-12090; I 
see that JIRA wants to adopt this model for moving work.

{quote}
I like the idea of offloading work in the abstract, but I don't know how much 
work we really offload in this situation. The NN still needs to track 
everything at the file level, which is the same order of magnitude as the block 
level. The NN is still doing blockmanagement and processing IBRs for the block 
movement. Distributing tracking work to the C-DNs adds latency and makes the 
system more complicated.
{quote}
I don't really see any extra latency involved. The work has to be sent to the 
DNs individually anyway. On top of that, we send the batch to one DN first; 
that DN does its own work as well as asking other DNs to transfer the blocks. 
Handling things at the block level still keeps the requirement of tracking at 
the file/directory level to make sure 
[jira] [Updated] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12159:

Attachment: HDFS-12159-HDFS-7240.006.patch

Uploading patch v6. 

Changes from v5 are:
* Fixed the test issue mentioned by [~xyao]
* Fixed 2 checkstyle warnings
* Fixed a Javadoc issue

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch, 
> HDFS-12159-HDFS-7240.004.patch, HDFS-12159-HDFS-7240.005.patch, 
> HDFS-12159-HDFS-7240.006.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129550#comment-16129550
 ] 

Hadoop QA commented on HDFS-12238:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 38s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12238 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882193/HDFS-12238-HDFS-7240.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fe532fd478a8 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 63edc5b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20725/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20725/artifact/patchprocess/patc

[jira] [Commented] (HDFS-12306) [branch-2]Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-16 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129565#comment-16129565
 ] 

Chen Liang commented on HDFS-12306:
---

[~szetszwo] could you please take a look? thanks!

> [branch-2]Separate class InnerNode from class NetworkTopology and make it 
> extendable
> 
>
> Key: HDFS-12306
> URL: https://issues.apache.org/jira/browse/HDFS-12306
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12306-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11430 to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12309) Incorrect null check in the AdminHelper.java

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129567#comment-16129567
 ] 

Hadoop QA commented on HDFS-12309:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12309 |
| GITHUB PR | https://github.com/apache/hadoop/pull/264 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 283801c82d89 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de462da |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20726/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20726/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20726/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect null check in the AdminHelper.java
> 
>
> Key: HDFS-12309
> URL: https://issues.apache.org/jira/browse/HDFS-12309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HDFS-12

[jira] [Comment Edited] (HDFS-12293) DataNode should log file name on disk error

2017-08-16 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16126790#comment-16126790
 ] 

Ajay Kumar edited comment on HDFS-12293 at 8/16/17 10:51 PM:
-

[~jojochuang], could you please review the patch? The test failures in Jenkins 
are unrelated.


was (Author: ajayydv):
[~jojochuang], Plz review the patch, test failures in jenkins are unrelated.

> DataNode should log file name on disk error
> ---
>
> Key: HDFS-12293
> URL: https://issues.apache.org/jira/browse/HDFS-12293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12293.01.patch
>
>
> Found the following error message in precommit build 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/488/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {noformat}
> 2017-08-10 09:36:53,619 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_670847838_18 at /127.0.0.1:55851 [Receiving block 
> BP-219227751-172.17.0.2-1502357801473:blk_1073741829_1005]] WARN  
> datanode.DataNode (BlockReceiver.java:<init>(287)) - IOException in 
> BlockReceiver constructor. Cause is 
> java.io.IOException: Not a directory
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:302)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:306)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:933)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbw(FsVolumeImpl.java:1202)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1356)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:215)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> {noformat}
> It is not known what file was being created.
> What's interesting is that {{DatanodeUtil#createFileWithExistsCheck}} does 
> carry the file name in the exception message, but the exception handlers at 
> {{DataTransfer#run()}} and {{BlockReceiver#BlockReceiver}} ignore it:
> {code:title=BlockReceiver#BlockReceiver}
>   // check if there is a disk error
>   IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
>   DataNode.LOG.warn("IOException in BlockReceiver constructor"
>   + (cause == null ? "" : ". Cause is "), cause);
>   if (cause != null) {
> ioe = cause;
> // Volume error check moved to FileIoProvider
>   }
> {code}
> The logger should print the file name in addition to the cause.
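
One possible shape of the fix, reusing only the names from the snippet above 
(a sketch; the committed patch may differ):

{code:java}
// Keep the extracted cause's message, which carries the file name from
// DatanodeUtil#createFileWithExistsCheck, instead of dropping it.
IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
DataNode.LOG.warn("IOException in BlockReceiver constructor"
    + (cause == null ? "" : ". Cause is " + cause.getMessage()), ioe);
if (cause != null) {
  ioe = cause;
  // Volume error check moved to FileIoProvider
}
{code}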



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12273) Federation UI

2017-08-16 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129575#comment-16129575
 ] 

Giovanni Matteo Fumarola commented on HDFS-12273:
-

[~goiri], it would be great if you could add some screenshots of the UI.

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12273) Federation UI

2017-08-16 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129575#comment-16129575
 ] 

Giovanni Matteo Fumarola edited comment on HDFS-12273 at 8/16/17 10:58 PM:
---

[~elgoiri], it would be great if you could add some screenshots of the UI.


was (Author: giovanni.fumarola):
[~goiri], it would be great if you add some screenshots of the UI.

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12273) Federation UI

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12273:
---
Attachment: federationUI-1.png
federationUI-2.png
federationUI-3.png

Screenshots from a development cluster. I removed some potentially 
confidential information.

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11430) Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11430:
---
Fix Version/s: 2.9.0

Thanks, Chen.

I have just merged this to branch-2.

> Separate class InnerNode from class NetworkTopology and make it extendable
> --
>
> Key: HDFS-11430
> URL: https://issues.apache.org/jira/browse/HDFS-11430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: h11430_20170217.patch, h11430_20170218.patch, 
> HDFS-11430-branch-2.001.patch
>
>
> The approach we will take in HDFS-11419 is to annotate the topology's inner 
> nodes with more information, such that it chooses a subtree that meets the 
> storage type requirement. However, {{InnerNode}} is not specific to HDFS, so 
> our change should not affect other components using this class.
> This JIRA separates {{InnerNode}} out of {{NetworkTopology}} and makes it 
> extendable. Therefore HDFS can have its own customized inner node class, 
> while other services can keep the inner node as it is right now.
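
A simplified sketch of the resulting shape (assumed, cut-down contracts for 
illustration; the real interfaces live in org.apache.hadoop.net and carry 
more methods):

{code:java}
import java.util.ArrayList;
import java.util.List;

interface Node {
  String getName();
}

interface InnerNode extends Node {
  void add(Node child);
  List<Node> getChildren();
}

/** Base implementation that NetworkTopology can depend on. */
class InnerNodeImpl implements InnerNode {
  private final String name;
  private final List<Node> children = new ArrayList<>();

  InnerNodeImpl(String name) {
    this.name = name;
  }

  @Override public String getName() { return name; }
  @Override public void add(Node child) { children.add(child); }
  @Override public List<Node> getChildren() { return children; }
}

/** HDFS-specific subclass that can carry extra annotations, e.g. storage types. */
class DfsInnerNode extends InnerNodeImpl {
  DfsInnerNode(String name) {
    super(name);
  }
  // Storage-type bookkeeping for HDFS-11419 would go here.
}
{code}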



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12273) Federation UI

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12273:
---
Attachment: federationUI-1.png

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12273) Federation UI

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12273:
---
Attachment: (was: federationUI-1.png)

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12306) [branch-2]Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-12306:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

I have cherry-picked HDFS-11430 to branch-2. Resolving this...

> [branch-2]Separate class InnerNode from class NetworkTopology and make it 
> extendable
> 
>
> Key: HDFS-12306
> URL: https://issues.apache.org/jira/browse/HDFS-12306
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12306-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11430 to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12311) [branch-2] HDFS specific network topology classes with storage type info included

2017-08-16 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12311:
--
Attachment: HDFS-12311-branch-2.001.patch

> [branch-2] HDFS specific network topology classes with storage type info 
> included
> -
>
> Key: HDFS-12311
> URL: https://issues.apache.org/jira/browse/HDFS-12311
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12311-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11450 to branch 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12311) [branch-2] HDFS specific network topology classes with storage type info included

2017-08-16 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12311:
--
Status: Patch Available  (was: Open)

> [branch-2] HDFS specific network topology classes with storage type info 
> included
> -
>
> Key: HDFS-12311
> URL: https://issues.apache.org/jira/browse/HDFS-12311
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12311-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11450 to branch 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12311) [branch-2] HDFS specific network topology classes with storage type info included

2017-08-16 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12311:
-

 Summary: [branch-2] HDFS specific network topology classes with 
storage type info included
 Key: HDFS-12311
 URL: https://issues.apache.org/jira/browse/HDFS-12311
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Chen Liang
Assignee: Chen Liang


This JIRA is to backport HDFS-11450 to branch 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11554) [Documentation] Router-based federation documentation

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-11554:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10467
   Status: Resolved  (was: Patch Available)

> [Documentation] Router-based federation documentation
> -
>
> Key: HDFS-11554
> URL: https://issues.apache.org/jira/browse/HDFS-11554
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-11554-000.patch, HDFS-11554-001.patch, 
> HDFS-11554.002.patch, HDFS-11554-HDFS-10467-003.patch, routerfederation.png
>
>
> Documentation describing Router-based HDFS federation (e.g., how to start a 
> federated cluster).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11554) [Documentation] Router-based federation documentation

2017-08-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129672#comment-16129672
 ] 

Íñigo Goiri commented on HDFS-11554:


Committed to HDFS-10467. Thanks [~chris.douglas] for the review!

> [Documentation] Router-based federation documentation
> -
>
> Key: HDFS-11554
> URL: https://issues.apache.org/jira/browse/HDFS-11554
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-11554-000.patch, HDFS-11554-001.patch, 
> HDFS-11554.002.patch, HDFS-11554-HDFS-10467-003.patch, routerfederation.png
>
>
> Documentation describing Router-based HDFS federation (e.g., how to start a 
> federated cluster).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA
Íñigo Goiri created HDFS-12312:
--

 Summary: Rebasing HDFS-10467 (2)
 Key: HDFS-12312
 URL: https://issues.apache.org/jira/browse/HDFS-12312
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: Fixing problems with the rebase of HDFS-10467:
# New changes added to {{HdfsFileStatus}}
# An error in rebase with {{bin/hdfs}}
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12312:
---
Attachment: HDFS-12312-HDFS-10467.patch

> Rebasing HDFS-10467 (2)
> ---
>
> Key: HDFS-12312
> URL: https://issues.apache.org/jira/browse/HDFS-12312
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Fixing problems with the rebase of HDFS-10467:
> # New changes added to {{HdfsFileStatus}}
> # An error in rebase with {{bin/hdfs}}
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12312-HDFS-10467.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12312:
---
Environment: (was: Fixing problems with the rebase of HDFS-10467:
# New changes added to {{HdfsFileStatus}}
# An error in rebase with {{bin/hdfs}})

> Rebasing HDFS-10467 (2)
> ---
>
> Key: HDFS-12312
> URL: https://issues.apache.org/jira/browse/HDFS-12312
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12312-HDFS-10467.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12312:
---
Description: 
Fixing problems with the rebase of HDFS-10467:
# New changes added to {{HdfsFileStatus}}
# An error in rebase with {{bin/hdfs}}

> Rebasing HDFS-10467 (2)
> ---
>
> Key: HDFS-12312
> URL: https://issues.apache.org/jira/browse/HDFS-12312
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12312-HDFS-10467.patch
>
>
> Fixing problems with the rebase of HDFS-10467:
> # New changes added to {{HdfsFileStatus}}
> # An error in rebase with {{bin/hdfs}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-12312.

   Resolution: Fixed
Fix Version/s: HDFS-10467

> Rebasing HDFS-10467 (2)
> ---
>
> Key: HDFS-12312
> URL: https://issues.apache.org/jira/browse/HDFS-12312
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12312-HDFS-10467.patch
>
>
> Fixing problems with the rebase of HDFS-10467:
> # New changes added to {{HdfsFileStatus}}
> # An error in rebase with {{bin/hdfs}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129691#comment-16129691
 ] 

Hadoop QA commented on HDFS-12159:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 71 
unchanged - 1 fixed = 71 total (was 72) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 43s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12159 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882213/HDFS-12159-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 90ff9a9227cf 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 293c425 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1

[jira] [Commented] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129689#comment-16129689
 ] 

Íñigo Goiri commented on HDFS-12312:


Not waiting for Jenkins because the branch is broken. Committed.

> Rebasing HDFS-10467 (2)
> ---
>
> Key: HDFS-12312
> URL: https://issues.apache.org/jira/browse/HDFS-12312
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12312-HDFS-10467.patch
>
>
> Fixing problems with the rebase of HDFS-10467:
> # New changes added to {{HdfsFileStatus}}
> # An error in rebase with {{bin/hdfs}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12312) Rebasing HDFS-10467 (2)

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12312:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-10467

> Rebasing HDFS-10467 (2)
> ---
>
> Key: HDFS-12312
> URL: https://issues.apache.org/jira/browse/HDFS-12312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12312-HDFS-10467.patch
>
>
> Fixing problems with the rebase of HDFS-10467:
> # New changes added to {{HdfsFileStatus}}
> # An error in rebase with {{bin/hdfs}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-10631:
---
Attachment: HDFS-10631-HDFS-10467-008.patch

# Rebased after HDFS-12312.
# Fixed most of [~subru]'s comments.

> Federation State Store ZooKeeper implementation
> ---
>
> Key: HDFS-10631
> URL: https://issues.apache.org/jira/browse/HDFS-10631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10631-HDFS-10467-001.patch, 
> HDFS-10631-HDFS-10467-002.patch, HDFS-10631-HDFS-10467-003.patch, 
> HDFS-10631-HDFS-10467-004.patch, HDFS-10631-HDFS-10467-005.patch, 
> HDFS-10631-HDFS-10467-006.patch, HDFS-10631-HDFS-10467-007.patch, 
> HDFS-10631-HDFS-10467-008.patch
>
>
> State Store implementation using ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-08-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129720#comment-16129720
 ] 

Andrew Wang commented on HDFS-11885:


Hey [~shahrs87], are you planning to post the patch that Daryn mentioned above? 
I'd like to move forward on this issue.

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.
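
A hedged sketch of the non-blocking alternative ({{provider}}, {{keyName}}, 
and {{LOG}} stand in for the real fields; the actual patch may structure this 
differently):

{code:java}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Warm the EDEK cache on a background thread so the createZone RPC does not
// block on a slow or unreachable KMS.
ExecutorService edekWarmUpExecutor = Executors.newSingleThreadExecutor();
edekWarmUpExecutor.submit(() -> {
  try {
    provider.warmUpEncryptedKeys(keyName);
  } catch (IOException ioe) {
    LOG.warn("Failed to warm up EDEK cache for key " + keyName, ioe);
  }
});
{code}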



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-16 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129736#comment-16129736
 ] 

George Huang commented on HDFS-11912:
-

The test randomly generates the number of iterations for the current run, and 
it may time out if the overall operation takes too long. I'm reducing the max 
number of iterations; I executed the test locally many times without a timeout.
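
A minimal sketch of the bounding idea ({{MAX_ITERS}} is an assumed constant, 
not a value from the patch):

{code:java}
import java.util.Random;

// Bound the randomly chosen iteration count so a long run cannot exceed the
// test's timeout budget.
final int MAX_ITERS = 10;
int iterations = 1 + new Random().nextInt(MAX_ITERS);
for (int i = 0; i < iterations; i++) {
  // ... perform one randomized file IO operation and verify snapshot state ...
}
{code}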

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-16 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129736#comment-16129736
 ] 

George Huang edited comment on HDFS-11912 at 8/17/17 1:29 AM:
--

Hi Manoj,

Thank you so much for the comment.

The test randomly generates the number of iterations to execute, and it may 
time out if the overall operation takes too long. I'm reducing the max number 
of iterations; I executed the test locally many times without a timeout.

Also fixed the checkstyle issues listed.

Many thanks,
George


was (Author: ghuangups):
Test randomly generated number of iterations for the current run. Test may time 
out if the overall operation takes too long. I'm reducing the max number of 
iterations and executed locally many times without timeout.

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-16 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: HDFS-11912.004.patch

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129751#comment-16129751
 ] 

Xiaoyu Yao commented on HDFS-12159:
---

Thanks Anu for the update. +1 on the v6 patch.

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch, 
> HDFS-12159-HDFS-7240.004.patch, HDFS-12159-HDFS-7240.005.patch, 
> HDFS-12159-HDFS-7240.006.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-16 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129759#comment-16129759
 ] 

liaoyuxiangqin commented on HDFS-12302:
---

bq. [~vagarychen] Thanks for your review. As you say, FsVolumeImpl#addBlockPool 
gets called, and this is where the bpSlices get added to the fsVolume; it is 
FsVolumeImpl#getAllVolumesMap that actually adds the ReplicaMap entries from 
the bpSlices.
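
To make the ordering concrete, a toy model of the sequence described above; the 
names are stand-ins for the real FsVolumeImpl fields, not the actual Hadoop 
code.

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Toy model only; stand-ins for FsVolumeImpl's bpSlices and the ReplicaMap. */
public class VolumeMapOrdering {
  static final List<String> bpSlices = new ArrayList<>();
  static final List<String> replicaMap = new ArrayList<>();

  /** Mirrors FsVolumeImpl#getVolumeMap: iterates whatever bpSlices holds. */
  static void getVolumeMap() {
    for (String slice : bpSlices) {
      replicaMap.add("replicas of " + slice);
    }
  }

  public static void main(String[] args) {
    getVolumeMap();                   // during FsDatasetImpl construction
    System.out.println(replicaMap);   // [] -- bpSlices is still empty
    bpSlices.add("BP-1");             // later: FsVolumeImpl#addBlockPool
    getVolumeMap();                   // later: via getAllVolumesMap
    System.out.println(replicaMap);   // [replicas of BP-1]
  }
}
{code}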

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
>   When I read the code that instantiates the FsDatasetImpl object during the 
> DataNode start process, I found that the getVolumeMap function actually can't 
> get ReplicaMap info for each fsVolume, because the fsVolume's bpSlices has 
> not been initialized at this time. The detailed code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> Then I added some info logs and started the DataNode; the log output accords 
> with the code description. The detailed log info is as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap process for each fsVolume is not 
> necessary when instantiating the FsDatasetImpl object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-16 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129759#comment-16129759
 ] 

liaoyuxiangqin edited comment on HDFS-12302 at 8/17/17 1:59 AM:


[~vagarychen] Thanks for your review. As you say, FsVolumeImpl#addBlockPool 
gets called, and this is where the bpSlices get added to the fsVolume; it is 
FsVolumeImpl#getAllVolumesMap that actually adds the ReplicaMap entries from 
the bpSlices.


was (Author: liaoyuxiangqin):
bq. [~vagarychen] Thanks for your review. As you say, FsVolumeImpl#addBlockPool 
gets called, and this is where the bpSlices get added to the fsVolume; it is 
FsVolumeImpl#getAllVolumesMap that actually adds the ReplicaMap entries from 
the bpSlices.

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
>   When I read the code that instantiates the FsDatasetImpl object during the 
> DataNode start process, I found that the getVolumeMap function actually can't 
> get ReplicaMap info for each fsVolume, because the fsVolume's bpSlices has 
> not been initialized at this time. The detailed code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> Then I added some info logs and started the DataNode; the log output accords 
> with the code description. The detailed log info is as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap process for each fsVolume is not 
> necessary when instantiating the FsDatasetImpl object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-16 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129764#comment-16129764
 ] 

SammiChen commented on HDFS-11082:
--

I double-checked the failed UTs; they are not related to this patch.

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch, 
> HDFS-11082.003.patch, HDFS-11082.004.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129773#comment-16129773
 ] 

Hadoop QA commented on HDFS-10631:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10631 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882240/HDFS-10631-HDFS-10467-008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux bd82770a06c3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / e19f93c |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20731/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20731/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20731/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store ZooKeeper implementation
> ---
>
> Key: HDFS-10631
> URL: https://issues.

[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129804#comment-16129804
 ] 

Weiwei Yang commented on HDFS-12283:


Hi [~yuanbo]

The new patch seems to cause a lot of UT failures; can you fix them? Thanks.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep track of container 
> blocks which are under deletion. It maintains info about the under-deletion 
> container blocks notified by KSM, and the state of how each is processed. We 
> can use RocksDB to implement the 1st version of the log; the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long value transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a 
> datanode. It represents the "state" of the transaction and is in the range of 
> \[-1, 5\]: -1 means the transaction eventually failed after some retries, and 
> 5 is the maximum number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.
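
A hedged sketch of what such an interface could look like; the method names and 
the transaction holder below are illustrative, not the committed API.

{code:java}
import java.io.IOException;
import java.util.List;

/** Illustrative only; the real API may differ. */
interface DeletedBlockLog {

  /** Minimal stand-in for one row of the schema table above. */
  class Transaction {
    long txID;             // incremental; one container, multiple blocks
    String containerName;
    List<String> blocks;
    int processedCount;    // -1 = failed after the max (5) retries
  }

  /** Append a new transaction for one container and its block list. */
  void addTransaction(String containerName, List<String> blocks)
      throws IOException;

  /** Fetch up to {@code count} pending transactions to send to datanodes. */
  List<Transaction> getTransactions(int count) throws IOException;

  /** Bump ProcessedCount for the given transactions after a send attempt. */
  void incrementCount(List<Long> txIDs) throws IOException;

  /** Remove transactions that datanodes have confirmed as deleted. */
  void commitTransactions(List<Long> txIDs) throws IOException;
}
{code}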



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12311) [branch-2] HDFS specific network topology classes with storage type info included

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129806#comment-16129806
 ] 

Hadoop QA commented on HDFS-12311:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} root: The patch generated 10 new + 320 unchanged 
- 4 fixed = 330 total (was 324) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  9s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.hdfs.TestMaintenanceState |
| JDK v1.8.0_144 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.7.0_131 Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.hdfs.server.datanode.TestDat

[jira] [Commented] (HDFS-12307) Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails

2017-08-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129811#comment-16129811
 ] 

Weiwei Yang commented on HDFS-12307:


Thanks [~xyao], you are right, this is a duplicate of HDFS-12216. Let's close 
this one as a duplicate; I will move the discussion to that issue.

> Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails
> ---
>
> Key: HDFS-12307
> URL: https://issues.apache.org/jira/browse/HDFS-12307
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>
> It seems this UT constantly fails with the following error:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
>   at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:146)
>   at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
>   at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1579)
>   at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1200)
>   at 
> org.apache.hadoop.ozone.web.exceptions.OzoneException.parse(OzoneException.java:248)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.executeGetKey(OzoneBucket.java:395)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.getKey(OzoneBucket.java:321)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:288)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:265)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12307) Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails

2017-08-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-12307.

Resolution: Duplicate
  Assignee: Weiwei Yang

> Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails
> ---
>
> Key: HDFS-12307
> URL: https://issues.apache.org/jira/browse/HDFS-12307
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> It seems this UT constantly fails with the following error:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
>   at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:146)
>   at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
>   at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1579)
>   at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1200)
>   at 
> org.apache.hadoop.ozone.web.exceptions.OzoneException.parse(OzoneException.java:248)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.executeGetKey(OzoneBucket.java:395)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.getKey(OzoneBucket.java:321)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:288)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:265)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12216) Ozone: TestKeys and TestKeysRatis are failing consistently

2017-08-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129817#comment-16129817
 ] 

Weiwei Yang commented on HDFS-12216:


I agree this has been failing since HDFS-12115. Since we are using random ports 
in the mini cluster, the ozone container will be started on a different port, 
but the information persisted in pipeline#datanodeID still refers to the old 
port. [~msingh], do you have any plan to submit a patch to fix this issue soon?

> Ozone: TestKeys and TestKeysRatis are failing consistently
> --
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> TestKeys and TestKeysRatis are failing consistently, as noted in the test 
> logs for HDFS-12183.
> TestKeysRatis is failing because of the following error:
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of:
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12216) Ozone: TestKeys and TestKeysRatis are failing consistently

2017-08-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129817#comment-16129817
 ] 

Weiwei Yang edited comment on HDFS-12216 at 8/17/17 3:14 AM:
-

I agree this has been failing since HDFS-12115. Since we are using random ports 
in the mini cluster, the ozone container will be started on a different port, 
but the information persisted in pipeline#datanodeID still refers to the old 
port. [~msingh], do you have any plan to submit a patch to fix this issue soon? 
Maybe a possible fix is, instead of using random ports, to find a free port at 
the beginning and assign that port to {{DFS_CONTAINER_IPC_PORT}}? A sketch of 
that idea follows.
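
The literal config key string below is an assumption standing in for the real 
{{DFS_CONTAINER_IPC_PORT}} constant; everything else uses standard Java and 
Hadoop {{Configuration}} calls.

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

import org.apache.hadoop.conf.Configuration;

/** Sketch: reserve a free port up front instead of letting the server pick. */
public class FreePortPicker {
  // Assumed literal for the DFS_CONTAINER_IPC_PORT key; verify before use.
  static final String DFS_CONTAINER_IPC_PORT = "dfs.container.ipc";

  static int findFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort(); // freed on close; a small race remains
    }
  }

  static void pinContainerPort(Configuration conf) throws IOException {
    conf.setInt(DFS_CONTAINER_IPC_PORT, findFreePort());
  }
}
{code}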


was (Author: cheersyang):
I agree this has been failing since HDFS-12115. Since we are using random ports 
in the mini cluster, the ozone container will be started on a different port, 
but the information persisted in pipeline#datanodeID still refers to the old 
port. [~msingh], do you have any plan to submit a patch to fix this issue soon?

> Ozone: TestKeys and TestKeysRatis are failing consistently
> --
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> TestKeys and TestKeysRatis are failing consistently, as noted in the test 
> logs for HDFS-12183.
> TestKeysRatis is failing because of the following error:
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of:
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129845#comment-16129845
 ] 

Hadoop QA commented on HDFS-11912:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882244/HDFS-11912.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b632a242db7d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab051bd |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20732/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20732/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20732/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache

[jira] [Commented] (HDFS-12248) SNN will not upload fsimage on IOE and Interrupted exceptions

2017-08-16 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129859#comment-16129859
 ] 

Vinayakumar B commented on HDFS-12248:
--

Patch looks almost good.
+1 once these nits are addressed:
1. Add {{timeout=6}} to @Test (see the sketch below).
2. Fix the checkstyle comments.
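
For nit 1, the shape of the change would be as below; the 60000 ms value is an 
assumption, since the archived comment's {{timeout=6}} appears truncated.

{code:java}
import org.junit.Test;

public class TestFsImageUploadSketch {
  // Bound the test so a hung upload fails fast rather than stalling the build.
  @Test(timeout = 60000)
  public void testUploadOnInterruptOrIOE() throws Exception {
    // test body elided
  }
}
{code}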



> SNN will not upload fsimage on IOE and Interrupted exceptions
> -
>
> Key: HDFS-12248
> URL: https://issues.apache.org/jira/browse/HDFS-12248
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12248-002.patch, HDFS-12248.patch
>
>
> Related to HDFS-9787. While the fsimage is uploading to the ANN, if any 
> interrupt or IOE occurs, {{isPrimaryCheckPointer}} is set to {{false}}. If a 
> rolling upgrade is triggered at the same time, the checkpoint is done without 
> sending the fsimage, since {{sendRequest}} will be {{false}}.
> So here the {{rollback}} image will not be sent to the ANN.
> {code}
>   } catch (ExecutionException e) {
> ioe = new IOException("Exception during image upload: " + 
> e.getMessage(),
> e.getCause());
> break;
>   } catch (InterruptedException e) {
> ie = e;
> break;
>   }
> }
> lastUploadTime = monotonicNow();
> // we are primary if we successfully updated the ANN
> this.isPrimaryCheckPointer = success;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11817) A faulty node can cause a lease leak and NPE on accessing data

2017-08-16 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129868#comment-16129868
 ] 

Vinayakumar B commented on HDFS-11817:
--

bq. I feel, this can be merged to branch-2.7 as well.
+1

> A faulty node can cause a lease leak and NPE on accessing data
> --
>
> Key: HDFS-11817
> URL: https://issues.apache.org/jira/browse/HDFS-11817
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11817.branch-2.patch, hdfs-11817_supplement.txt, 
> HDFS-11817.v2.branch-2.8.patch, HDFS-11817.v2.branch-2.patch, 
> HDFS-11817.v2.trunk.patch
>
>
> When the namenode performs a lease recovery for a failed write, the 
> {{commitBlockSynchronization()}} will fail if none of the new targets has 
> sent a received-IBR. At this point, the data is inaccessible, as the 
> namenode will throw a {{NullPointerException}} upon {{getBlockLocations()}}.
> The lease recovery will be retried in about an hour by the namenode. If the 
> nodes are faulty (usually when there is only one new target), they may not 
> send a block report until this point. If this happens, lease recovery throws 
> an {{AlreadyBeingCreatedException}}, which causes the LeaseManager to simply 
> remove the lease without finalizing the inode.
> This results in an inconsistent lease state. The inode stays 
> under-construction, but no more lease recovery is attempted. A manual lease 
> recovery is also not allowed. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-16 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129881#comment-16129881
 ] 

Manoj Govindassamy commented on HDFS-11912:
---

[~ghuangups],
  The checkstyle issues are mostly fixed; just the last 4 are pending. The 
newly added unit test is still failing. Can you please take a look?


> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12293) DataNode should log file name on disk error

2017-08-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129882#comment-16129882
 ] 

Wei-Chiu Chuang commented on HDFS-12293:


Thanks for the patch, Ajay Kumar.

Looks good. Would you update DataTransfer#run() as well?

Thanks

> DataNode should log file name on disk error
> ---
>
> Key: HDFS-12293
> URL: https://issues.apache.org/jira/browse/HDFS-12293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12293.01.patch
>
>
> Found the following error message in precommit build 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/488/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {noformat}
> 2017-08-10 09:36:53,619 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_670847838_18 at /127.0.0.1:55851 [Receiving block 
> BP-219227751-172.17.0.2-1502357801473:blk_1073741829_1005]] WARN  
> datanode.DataNode (BlockReceiver.java:(287)) - IOException in 
> BlockReceiver constructor. Cause is 
> java.io.IOException: Not a directory
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:302)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:306)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:933)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbw(FsVolumeImpl.java:1202)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1356)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:215)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> {noformat}
> It is not known what file was being created.
> What's interesting is that {{DatanodeUtil#createFileWithExistsCheck}} does 
> carry the file name in the exception message, but the exception handlers at 
> {{DataTransfer#run()}} and {{BlockReceiver#BlockReceiver}} ignore it:
> {code:title=BlockReceiver#BlockReceiver}
>   // check if there is a disk error
>   IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
>   DataNode.LOG.warn("IOException in BlockReceiver constructor"
>   + (cause == null ? "" : ". Cause is "), cause);
>   if (cause != null) {
> ioe = cause;
> // Volume error check moved to FileIoProvider
>   }
> {code}
> The logger should print the file name in addition to the cause.
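
A toy demonstration of the suggested fix: append the cause's message, which 
carries the file name, to the warning text. The path and messages below are 
made up.

{code:java}
import java.io.IOException;

/** Toy demo only; not the DataNode code itself. */
public class DiskErrorLogDemo {
  public static void main(String[] args) {
    IOException cause =
        new IOException("/data1/dfs/dn/rbw/blk_1073741829 could not be created");
    // Before: the handler appended only ". Cause is " and dropped the text.
    // After: include cause.getMessage() so the file name reaches the log line.
    String msg = "IOException in BlockReceiver constructor"
        + (cause == null ? "" : ". Cause is: " + cause.getMessage());
    System.out.println(msg);
  }
}
{code}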



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-16 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129881#comment-16129881
 ] 

Manoj Govindassamy edited comment on HDFS-11912 at 8/17/17 4:18 AM:


[~ghuangups],
Thanks very much for working on the patch revision. The checkstyle issues are 
mostly fixed; just the last 4 are pending. The newly added unit test is still 
failing. Can you please take a look?



was (Author: manojg):
[~ghuangups],
  The checkstyle issues are mostly fixed; just the last 4 are pending. The 
newly added unit test is still failing. Can you please take a look?


> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129884#comment-16129884
 ] 

Akira Ajisaka commented on HDFS-12269:
--

+1, I ran the failed tests locally and all of them passed.

> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch, 
> HDFS-12269.003.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol 
> returns a HashMap:
> {code:java}
>   HashMap<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.
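
A small sketch of the program-to-the-interface change; the generic parameters 
are assumed, since the archive stripped the angle brackets.

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/** Sketch: callers see Map; the concrete HashMap is an implementation detail. */
interface CodecsApi {
  // Before: HashMap<String, String> getErasureCodingCodecs() throws IOException;
  Map<String, String> getErasureCodingCodecs() throws IOException;
}

class CodecsApiImpl implements CodecsApi {
  @Override
  public Map<String, String> getErasureCodingCodecs() {
    Map<String, String> codecs = new HashMap<>();
    codecs.put("rs", "rs_native,rs_java"); // illustrative entry only
    return codecs;
  }
}
{code}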



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12269:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~HuafengWang] for the contribution!

> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch, 
> HDFS-12269.003.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol 
> returns a HashMap:
> {code:java}
>   HashMap<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129913#comment-16129913
 ] 

Hudson commented on HDFS-12269:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12200/])
HDFS-12269. Better to return a Map rather than HashMap in (aajisaka: rev 
08aaa4b36fab44c3f47878b3c487db3b373ffccf)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecRegistry.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java


> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch, 
> HDFS-12269.003.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol 
> returns a HashMap:
> {code:java}
>   HashMap<String, String> getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12313:
-

 Summary: Ozone: SCM: move container/pipeline StateMachine to the 
right package
 Key: HDFS-12313
 URL: https://issues.apache.org/jira/browse/HDFS-12313
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDFS-12305 added a StateMachine for pipeline/container. However, the package 
was incorrectly placed under a new top-level package in hadoop-hdfs-client. 
This was caused by a rename mistake of mine before submitting the patch.

This ticket is opened to move it to the right package under 
hadoop-hdfs-project/hadoop-hdfs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12313:
--
Affects Version/s: HDFS-7240

> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HDFS-12305 added a StateMachine for pipeline/container. However, the package 
> was incorrectly placed under a new top-level package in hadoop-hdfs-client. 
> This was caused by a rename mistake of mine before submitting the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


