[jira] [Commented] (HDFS-10531) Add EC policy and storage policy related usage summarization function to dfs du command

2017-08-17 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131769#comment-16131769
 ] 

SammiChen commented on HDFS-10531:
--

Closing this JIRA since the function is addressed by HDFS-11646 and HDFS-11647.

> Add EC policy and storage policy related usage summarization function to dfs 
> du command
> ---
>
> Key: HDFS-10531
> URL: https://issues.apache.org/jira/browse/HDFS-10531
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Rui Gao
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-10531.001.patch
>
>
> Currently du command output:
> {code}
> [ ~]$ hdfs dfs -du  -h /home/rgao/
> 0  /home/rgao/.Trash
> 0  /home/rgao/.staging
> 100 M  /home/rgao/ds
> 250 M  /home/rgao/ds-2
> 200 M  /home/rgao/noECBackup-ds
> 500 M  /home/rgao/noECBackup-ds-2
> {code}
> For HDFS users and administrators, EC policy and storage policy related usage 
> summarization would be very helpful when managing a cluster's storage. The 
> intended output of du could look like the following (a rough aggregation 
> sketch follows the sample output).
> {code}
> [ ~]$ hdfs dfs -du -h -t (total; parameter to be added) /home/rgao
>  
> 0  /home/rgao/.Trash
> 0  /home/rgao/.staging
> [Archive] [EC:RS-DEFAULT-6-3-64k] 100 M  /home/rgao/ds
> [DISK] [EC:RS-DEFAULT-6-3-64k] 250 M  /home/rgao/ds-2
> [DISK] [Replica] 200 M  /home/rgao/noECBackup-ds
> [DISK] [Replica] 500 M  /home/rgao/noECBackup-ds-2
>  
> Total:
>  
> [Archive][EC:RS-DEFAULT-6-3-64k]  100 M
> [Archive][Replica]0 M
> [DISK] [EC:RS-DEFAULT-6-3-64k] 250 M
> [DISK] [Replica]   700 M  
>  
> [Archive][ALL] 100M
> [DISK][ALL]  950M
> [ALL] [EC:RS-DEFAULT-6-3-64k]350M
> [ALL] [Replica]  700M
> {code} 
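
A rough sketch of the aggregation behind the proposed totals, keyed by (storage 
policy, EC policy); the class and method names here are illustrative only, not 
part of any patch:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative: accumulate per-path usage into policy-pair buckets, plus the
// [ALL] roll-ups shown in the sample output above.
public class PolicyUsageSummary {
  private final Map<String, Long> totals = new HashMap<>();

  public void add(String storagePolicy, String ecPolicy, long bytes) {
    accumulate("[" + storagePolicy + "][" + ecPolicy + "]", bytes);
    accumulate("[" + storagePolicy + "][ALL]", bytes);
    accumulate("[ALL][" + ecPolicy + "]", bytes);
  }

  private void accumulate(String key, long bytes) {
    totals.merge(key, bytes, Long::sum);
  }

  public Map<String, Long> getTotals() {
    return totals;
  }
}
{code}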






[jira] [Updated] (HDFS-12258) ec -listPolicies should list all policies in the system, whether enabled or disabled

2017-08-17 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12258:

Attachment: HDFS-12258.04.patch

Thanks [~rakeshr] for reviewing the patch and for the suggestion! Patch rebased 
with the doc updated, thanks!

> ec -listPolicies should list all policies in the system, whether enabled or 
> disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch, HDFS-12258.04.patch
>
>
> ec -listPolicies should list all policies in the system, whether enabled or 
> disabled






[jira] [Updated] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-17 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-10631:
---
Attachment: HDFS-10631-HDFS-10467-010.patch

Triggering the build again to check unit tests.

> Federation State Store ZooKeeper implementation
> ---
>
> Key: HDFS-10631
> URL: https://issues.apache.org/jira/browse/HDFS-10631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10631-HDFS-10467-001.patch, 
> HDFS-10631-HDFS-10467-002.patch, HDFS-10631-HDFS-10467-003.patch, 
> HDFS-10631-HDFS-10467-004.patch, HDFS-10631-HDFS-10467-005.patch, 
> HDFS-10631-HDFS-10467-006.patch, HDFS-10631-HDFS-10467-007.patch, 
> HDFS-10631-HDFS-10467-008.patch, HDFS-10631-HDFS-10467-009.patch, 
> HDFS-10631-HDFS-10467-010.patch
>
>
> State Store implementation using ZooKeeper.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131669#comment-16131669
 ] 

Hadoop QA commented on HDFS-11912:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882484/HDFS-11912.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bf330dd2c363 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99e558b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20749/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20749/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20749/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20749/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912

[jira] [Updated] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12317:
-
Affects Version/s: (was: 3.0.0-beta1)
   (was: 2.9.0)
   3.0.0-alpha4

> HDFS metrics render incorrectly in the git repository page
> 
>
> Key: HDFS-12317
> URL: https://issues.apache.org/jira/browse/HDFS-12317
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, metrics
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: metrics-render-error.jpg
>
>
> Some HDFS metrics render incorrectly in the git repository page. 
> The page link: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131661#comment-16131661
 ] 

Yuanbo Liu commented on HDFS-12283:
---

[~anu]/[~cheersyang] Thanks a lot for your comments. 
Apart from the part you've already discussed, here are my replies to Anu's questions.
{quote}
DeletedBlockLogImpl.java#commitTransactions: This is a hypothetical question. 
During the commitTransaction call...
{quote}
If one of the txids is invalid, that will cause commitTransactions to fail, and 
those txids will be added to the retry queue again. I used a batch operation 
here for efficiency, but your comment convinced me that we should commit txids 
one by one, so that a single invalid txid doesn't force the other txids to be 
retried many times.
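
To make that concrete, a minimal sketch of the per-txid commit, assuming the 
RocksDB-backed {{MetadataStore}} mentioned in this JIRA is available as a 
{{store}} field (the actual DeletedBlockLogImpl code may differ):

{code}
import com.google.common.primitives.Longs;
import java.io.IOException;
import java.util.List;

// Sketch: delete each transaction individually so that a single invalid txid
// fails alone instead of failing the whole batch and sending every other
// txid back to the retry queue. "store" and LOG are assumed fields of the
// log implementation.
public void commitTransactions(List<Long> txIDs) throws IOException {
  for (Long txID : txIDs) {
    try {
      store.delete(Longs.toByteArray(txID));
    } catch (IOException e) {
      LOG.warn("Cannot commit txid " + txID + ", it will be retried", e);
    }
  }
}
{code}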
{quote}
addTransactions(Map blockMap)  in this call is there a 
size limit to the list of blocks in the argument. 
{quote}
I think the answer is yes, because this method is invoked when KSM sends the 
delete-keys command to SCM, and we will have that kind of limit in KSM. 
This will be addressed in HDFS-12235.
The other comments make sense to me and I will address them in the v4 patch. 
Thanks again for your kind review.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks as notified by KSM, and the state of how each is processed. 
> We can use RocksDB to implement the first version of the log; the schema 
> looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a 
> datanode; it represents the "state" of the transaction and is in the range 
> \[-1, 5\]: -1 means the transaction eventually failed after some retries, 
> and 5 is the maximum number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.
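
A minimal sketch of what such an interface could look like; the method names 
are illustrative, drawn from the discussion above, not the committed API:

{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;

// Illustrative interface for the transaction log described above.
public interface DeletedBlockLog {
  // Append one transaction per container, each with its block list.
  void addTransactions(Map<String, List<String>> containerBlocksMap)
      throws IOException;

  // Fetch up to num pending transaction ids (ProcessedCount in [0, maxRetry)).
  List<Long> getTransactions(int num) throws IOException;

  // Bump ProcessedCount for these txids; set it to -1 once maxRetry is hit.
  void incrementCount(List<Long> txIDs) throws IOException;

  // Remove transactions whose blocks datanodes confirmed as deleted.
  void commitTransactions(List<Long> txIDs) throws IOException;
}
{code}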






[jira] [Updated] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12317:
-
Description: 
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md

  was:
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
Render error screenshot:

!metrics-render-error.jpg|thumbnail!


> HDFS metrics render incorrectly in the git repository page
> 
>
> Key: HDFS-12317
> URL: https://issues.apache.org/jira/browse/HDFS-12317
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, metrics
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: metrics-render-error.jpg
>
>
> Some HDFS metrics render incorrectly in the git repository page. 
> The page link: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md






[jira] [Updated] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12317:
-
Description: 
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
Render error screenshot:

!metrics-render-error.jpg|thumbnail!

  was:
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
Render error screenshot:
!https://issues.apache.org/jira/secure/attachment/12882495/metrics-render-error.jpg|thumbnail!


> HDFS metrics render incorrectly in the git repository page
> 
>
> Key: HDFS-12317
> URL: https://issues.apache.org/jira/browse/HDFS-12317
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, metrics
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: metrics-render-error.jpg
>
>
> Some HDFS metrics render incorrectly in the git repository page. 
> The page link: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
> Render error screenshot:
> !metrics-render-error.jpg|thumbnail!






[jira] [Updated] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12317:
-
Description: 
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
Render error screenshot:
!https://issues.apache.org/jira/secure/attachment/12882495/metrics-render-error.jpg|thumbnail!

  was:
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
Render error screenshot:
!attachment-name.jpg|thumbnail!


> HDFS metrics render incorrectly in the git repository page
> 
>
> Key: HDFS-12317
> URL: https://issues.apache.org/jira/browse/HDFS-12317
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, metrics
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: metrics-render-error.jpg
>
>
> Some HDFS metrics render incorrectly in the git repository page. 
> The page link: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
> Render error screenshot:
> !https://issues.apache.org/jira/secure/attachment/12882495/metrics-render-error.jpg|thumbnail!






[jira] [Updated] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12317:
-
Description: 
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
Render error screenshot:
!attachment-name.jpg|thumbnail!

  was:
Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md



> HDFS metrics render incorrectly in the git repository page
> 
>
> Key: HDFS-12317
> URL: https://issues.apache.org/jira/browse/HDFS-12317
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, metrics
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: metrics-render-error.jpg
>
>
> Some HDFS metrics render incorrectly in the git repository page. 
> The page link: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
> Render error screenshot:
> !attachment-name.jpg|thumbnail!






[jira] [Updated] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12317:
-
Attachment: metrics-render-error.jpg

> HDFS metrics render incorrectly in the git repository page
> 
>
> Key: HDFS-12317
> URL: https://issues.apache.org/jira/browse/HDFS-12317
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, metrics
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: metrics-render-error.jpg
>
>
> Some HDFS metrics render incorrectly in the git repository page. 
> The page link: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md






[jira] [Created] (HDFS-12317) HDFS metrics render incorrectly in the git repository page

2017-08-17 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12317:


 Summary: HDFS metrics render incorrectly in the git repository page
 Key: HDFS-12317
 URL: https://issues.apache.org/jira/browse/HDFS-12317
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, metrics
Affects Versions: 2.9.0, 3.0.0-beta1
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


Some HDFS metrics render incorrectly in the git repository page. 
The page link: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md







[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131633#comment-16131633
 ] 

Weiwei Yang commented on HDFS-12283:


Hi [~anu]

Thanks for helping to review this. Some of my comments

bq. #LATEST_TXID# – Do we need this, since this table will have nothing but 
TXIDs? Do we need to keep this under this specific key?

I was suggesting adding this to avoid a bad case. Let's say there are 
initially the following entries in the table

||TXID||ContainerBlocks||
|1|xxx|
|2|xxx|
|3|xxx|
|4|xxx|
|5|xxx|
|6|xxx|

At some point, txids 5 and 6 get committed. Now the table becomes

||TXID||ContainerBlocks||
|1|xxx|
|2|xxx|
|3|xxx|
|4|xxx|

Then SCM gets restarted; it seeks to the end and will return 4, so the next 
TXID will be generated as 5 again... This should be fine, as a commit means the 
TXIDs were successfully handled at the datanodes. But it also means TXIDs are 
*non-unique*, which might cause problems in some places, such as auditing?
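
A minimal sketch of the dedicated-key idea (the key name and store API are 
assumed, following the {{MetadataStore}} mentioned in this JIRA):

{code}
import com.google.common.primitives.Longs;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch: persist the last allocated TXID under a reserved key so a restart
// resumes from it, instead of seeking to the end of a table whose tail may
// already have been committed and deleted.
public class TxidAllocator {
  private static final byte[] LATEST_TXID =
      "#LATEST_TXID#".getBytes(StandardCharsets.UTF_8);
  private final MetadataStore store; // RocksDB-backed store, as in this JIRA

  public TxidAllocator(MetadataStore store) {
    this.store = store;
  }

  public synchronized long nextTxid() throws IOException {
    byte[] value = store.get(LATEST_TXID);
    long next = (value == null ? 0L : Longs.fromByteArray(value)) + 1;
    store.put(LATEST_TXID, Longs.toByteArray(next));
    return next;
  }
}
{code}

With this, the restart scenario above would resume at txid 7 rather than 
reusing 5.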

bq. DeleteBlockLogImpl.java#incrementCount Would like to understand what 
happens after we set this to -1

We may need to add more documentation about the {{count}}. This variable is 
actually a simpler form of the {{state}}; maybe {{TimesBeingProcessed}} is a 
better name. {{-1}} means that we give up retrying on this TX, as it has 
failed enough times. Do you really want to keep retrying without a limit?
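
For illustration, the counting could look roughly like this (the accessor 
names are hypothetical):

{code}
// Sketch of the "TimesBeingProcessed" idea: bump the per-TX count on each
// send to a datanode; once the retry limit is exceeded, park the TX at -1,
// meaning we have given up on it.
private void incrementCount(long txid) throws IOException {
  int count = getProcessedCount(txid);      // hypothetical accessor
  if (count >= 0) {                         // -1 entries stay failed
    count = (count < maxRetry) ? count + 1 : -1;
    putProcessedCount(txid, count);         // hypothetical persist
  }
}
{code}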

Thanks

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep tracking container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks that notified by KSM, and the state how it is processed. We 
> can use RocksDB to implement the 1st version of the log, the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long value transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to datanode, 
> it represents the "state" of the transaction, it is in range of \[-1, 5\], -1 
> means the transaction eventually failed after some retries, 5 is the max 
> number times of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131654#comment-16131654
 ] 

Anu Engineer commented on HDFS-12283:
-

bq. We will need some tool to fix corrupted blocks in the future, in case the 
datanode could not remove them
Perfect, let us do that and file a follow-up JIRA to add this option to SCMCLI.



> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep tracking container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks that notified by KSM, and the state how it is processed. We 
> can use RocksDB to implement the 1st version of the log, the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long value transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to datanode, 
> it represents the "state" of the transaction, it is in range of \[-1, 5\], -1 
> means the transaction eventually failed after some retries, 5 is the max 
> number times of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131642#comment-16131642
 ] 

Anu Engineer commented on HDFS-12283:
-

bq. But this also means TXIDs are non-unique, which might cause problems in 
some places, such as auditing?
[~cheersyang] I had not thought about that; thanks for pointing it out. I agree, 
let us keep the LastTXID in a separate key. This is similar to what HDFS does, 
where we write this down in a separate file.

bq. Do you really want to keep retrying without a limit?
At least for the short term, yes. Say we take the option of going with -1: 
what happens when a delete fails? I don't think we have come up with a good 
strategy to handle that case yet. We can only fail once we define what happens 
after a delete fails.


> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep tracking container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks that notified by KSM, and the state how it is processed. We 
> can use RocksDB to implement the 1st version of the log, the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long value transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to datanode, 
> it represents the "state" of the transaction, it is in range of \[-1, 5\], -1 
> means the transaction eventually failed after some retries, 5 is the max 
> number times of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131652#comment-16131652
 ] 

Weiwei Yang commented on HDFS-12283:


Hi [~anu]

bq. At least for the short term, yes. Say we take the option of going with -1: 
what happens when a delete fails? I don't think we have come up with a good 
strategy to handle that case yet.

We will need some tool to fix *corrupted* blocks in the future, in case the 
datanode could not remove them (e.g. when the metadata is broken): some sort of 
purge command to remove that stuff forcibly. I agree to have a big retry limit 
for now. That at least gives us the choice to dump the DB and see how many 
dirty entries are there, so an admin could manually fix things up.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep tracking container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks that notified by KSM, and the state how it is processed. We 
> can use RocksDB to implement the 1st version of the log, the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long value transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to datanode, 
> it represents the "state" of the transaction, it is in range of \[-1, 5\], -1 
> means the transaction eventually failed after some retries, 5 is the max 
> number times of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.






[jira] [Comment Edited] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131642#comment-16131642
 ] 

Anu Engineer edited comment on HDFS-12283 at 8/18/17 2:52 AM:
--

bq. But this also means TXIDs are non-unique, which might cause problems in 
some places, such as auditing?
[~cheersyang] I had not thought about that; thanks for pointing it out. I agree, 
let us keep the LastTXID in a separate key. This is similar to what HDFS does, 
where we write this down in a separate file.

bq. Do you really want to keep retrying without a limit?
At least for the short term, yes. Say we take the option of going with -1: 
what happens when a delete fails? I don't think we have come up with a good 
strategy to handle that case yet. We can only fail once we define what happens 
after a delete fails.

Or make this retry limit really big for the time being, say 4096 retries with 
5-minute intervals in between; that is about a two-week window before we fail 
(4096 × 5 min ≈ 14 days).



was (Author: anu):
bq. But this also means TXIDs are non-unique, which might cause problems in 
some places, such as auditing?
[~cheersyang] I had not thought about that; thanks for pointing it out. I agree, 
let us keep the LastTXID in a separate key. This is similar to what HDFS does, 
where we write this down in a separate file.

bq. Do you really want to keep retrying without a limit?
At least for the short term, yes. Say we take the option of going with -1: 
what happens when a delete fails? I don't think we have come up with a good 
strategy to handle that case yet. We can only fail once we define what happens 
after a delete fails.


> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep tracking container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks that notified by KSM, and the state how it is processed. We 
> can use RocksDB to implement the 1st version of the log, the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations
> # TxID is an incremental long value transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to datanode, 
> it represents the "state" of the transaction, it is in range of \[-1, 5\], -1 
> means the transaction eventually failed after some retries, 5 is the max 
> number times of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.






[jira] [Resolved] (HDFS-12039) Ozone: Implement update volume owner in ozone shell

2017-08-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-12039.

   Resolution: Fixed
Fix Version/s: HDFS-7240

> Ozone: Implement update volume owner in ozone shell
> ---
>
> Key: HDFS-12039
> URL: https://issues.apache.org/jira/browse/HDFS-12039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Lokesh Jain
> Fix For: HDFS-7240
>
>
> The Ozone shell command {{updateVolume}} should support updating the owner 
> of a volume, using the following syntax
> {code}
> hdfs oz -updateVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -owner 
> xyz -root
> {code}
> This should work from the REST API; the following command changes the volume 
> owner to {{www}}
> {code}
> curl -X PUT -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" 
> -H "x-ozone-user:www" -H "Authorization:OZONE root" 
> http://ozone1.fyre.ibm.com:9864/volume-wwei-0
> {code}






[jira] [Commented] (HDFS-12039) Ozone: Implement update volume owner in ozone shell

2017-08-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131648#comment-16131648
 ] 

Weiwei Yang commented on HDFS-12039:


Thanks [~ljain] for confirming this is working. I think this was fixed by 
HDFS-12118, which changed the argument from -owner to -user. Let's close it as 
fixed upstream. Thank you.

> Ozone: Implement update volume owner in ozone shell
> ---
>
> Key: HDFS-12039
> URL: https://issues.apache.org/jira/browse/HDFS-12039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Lokesh Jain
>
> The Ozone shell command {{updateVolume}} should support updating the owner 
> of a volume, using the following syntax
> {code}
> hdfs oz -updateVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -owner 
> xyz -root
> {code}
> This should work from the REST API; the following command changes the volume 
> owner to {{www}}
> {code}
> curl -X PUT -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" 
> -H "x-ozone-user:www" -H "Authorization:OZONE root" 
> http://ozone1.fyre.ibm.com:9864/volume-wwei-0
> {code}






[jira] [Updated] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12159:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~xyao] and [~szetszwo] Thanks for the reviews. I have committed this patch to 
the feature branch.

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch, 
> HDFS-12159-HDFS-7240.004.patch, HDFS-12159-HDFS-7240.005.patch, 
> HDFS-12159-HDFS-7240.006.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.






[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-08-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131609#comment-16131609
 ] 

Brahma Reddy Battula commented on HDFS-5040:


[~kshukla] Nice and very neat work here, appreciate it! The latest patch looks 
almost good to me, apart from the following minor nits in the tests.

1) Instead of the following in {{TestAuditLogger#testWebHdfsAuditLogger}}, can 
you reset the log count as in {{testAuditLoggerWithSetPermission}}?

{code}
assertEquals("getfileinfo", DummyAuditLogger.lastCommand);
int logCount = DummyAuditLogger.logCount;
{code}
Just add this
{code}
cluster.waitClusterUp();
DummyAuditLogger.resetLogCount();
{code}
2) In {{TestNameNodeMXBean}}, can we handle it like this?
{code}
if (opType.equals(TopConf.ALL_CMDS)) {
  expected = 2 * NUM_OPS + 2;
} else if (opType.equals("datanodeReport")) {
  expected = 2;
} else {
  // ... existing handling for the remaining op types
}
{code}

Would anyone else like to chime in? Thanks.

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Raghu C Doppalapudi
>Assignee: Kuhu Shukla
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.001.patch, HDFS-5040.004.patch, 
> HDFS-5040.005.patch, HDFS-5040.006.patch, HDFS-5040.007.patch, 
> HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> Enable audit logging for all the admin commands, and also provide the ability 
> to log all the admin commands to a separate log file; at this point all the 
> logging is displayed on the console.
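
For context, a pluggable audit logger along the lines of the DummyAuditLogger 
used in the tests can be sketched against the public {{AuditLogger}} interface 
(a sketch, not the patch itself):

{code}
import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hdfs.server.namenode.AuditLogger;

// Sketch: counts audit events so a test can assert that admin commands
// (e.g. datanodeReport) now reach the audit log.
public class CountingAuditLogger implements AuditLogger {
  static volatile int logCount;
  static volatile String lastCommand;

  @Override
  public void initialize(Configuration conf) {
    logCount = 0;
  }

  @Override
  public void logAuditEvent(boolean succeeded, String userName,
      InetAddress addr, String cmd, String src, String dst, FileStatus stat) {
    logCount++;
    lastCommand = cmd;
  }
}
{code}

Such a logger would be registered via {{dfs.namenode.audit.loggers}} in the 
test configuration.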






[jira] [Commented] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131605#comment-16131605
 ] 

Hadoop QA commented on HDFS-10631:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
38s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}143m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10631 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882458/HDFS-10631-HDFS-10467-009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 0b4e86c37276 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 

[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-17 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131579#comment-16131579
 ] 

George Huang commented on HDFS-11912:
-

Reduced the number of test files to be created and ran the test locally 
multiple times; all passed. Also fixed a couple of checkstyle issues.

Many thanks,
George

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-17 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: HDFS-11912.005.patch

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Updated] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12313:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

Thanks [~anu] for the review. I've committed the patch to the feature branch. 

> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-12313-HDFS-7240.001.patch, 
> HDFS-12313-HDFS-7240.002.patch, HDFS-12313-HDFS-7240.003.patch, 
> HDFS-12313-HDFS-7240.004.patch
>
>
> HDFS-12305 added a StateMachine for pipeline/container. However, the package 
> was incorrectly placed under a new top-level package, hadoop-hdfs-client. 
> This was caused by a rename mistake of mine before submitting the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.






[jira] [Updated] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12316:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Jenkins build https://builds.apache.org/job/Hadoop-trunk-Commit/12208/ passed.

Thanks [~yzhangal] for the review. Committed the patch to trunk and branch-2.

> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12316.01.patch, HDFS-12316.02.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. The open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature ("dfs.namenode.snapshot.capture.openfiles").






[jira] [Resolved] (HDFS-12201) INode#getSnapshotINode() should get INodeAttributes from INodeAttributesProvider for the current INode

2017-08-17 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-12201.
---
   Resolution: Not A Bug
Fix Version/s: 3.0.0-beta1

[~daryn],
  Thanks for the comments. Agreed, the edit log and the fsimage could end up 
persisted with external attributes. FSDirectory seems to be the right level at 
which to aggregate or transform the results with external attributes so as to 
keep the abstraction clean. Closing the bug.

> INode#getSnapshotINode() should get INodeAttributes from 
> INodeAttributesProvider for the current INode
> --
>
> Key: HDFS-12201
> URL: https://issues.apache.org/jira/browse/HDFS-12201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.8.0
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12201.test.01.patch
>
>
> Problem: When an external INodeAttributesProvider is enabled, SnapshotDiff 
> does not detect changes in files when the external ACL/XAttr attributes 
> change. {{FileWithSnapshotFeature#changedBetweenSnapshots()}}, when trying to 
> detect changes in snapshots for the given file, does a metadata comparison 
> which takes in the attributes retrieved from {{INode#getSnapshotINode()}}
> {{INodeFile}}
> {noformat}
>   @Override
>   public INodeFileAttributes getSnapshotINode(final int snapshotId) {
> FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
> if (sf != null) {
>   return sf.getDiffs().getSnapshotINode(snapshotId, this);
> } else {
>   return this;
> }
>   }
> {noformat}
> {{AbstractINodeDiffList#getSnapshotINode}}
> {noformat}
>   public A getSnapshotINode(final int snapshotId, final A currentINode) {
> final D diff = getDiffById(snapshotId);
> final A inode = diff == null? null: diff.getSnapshotINode();
> return inode == null? currentINode: inode;
>   }
> {noformat}
> But INodeFile's and INodeDirectory's getSnapshotINode() return the INode's 
> locally stored INodeAttributes for the given snapshot id. When an 
> INodeAttributesProvider is configured, the attributes supplied by the 
> external provider could differ from the local ones, yet getSnapshotINode() 
> always returns the local attributes without retrieving them from the 
> attributes provider.






[jira] [Commented] (HDFS-11225) NameNode crashed because deleteSnapshot held FSNamesystem lock too long

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131518#comment-16131518
 ] 

Andrew Wang commented on HDFS-11225:


Hi Manoj, is this still targeted at beta1? Long time since the last update.

> NameNode crashed because deleteSnapshot held FSNamesystem lock too long
> ---
>
> Key: HDFS-11225
> URL: https://issues.apache.org/jira/browse/HDFS-11225
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
> Environment: CDH5.8.2, HA
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Critical
>  Labels: high-availability
>
> The deleteSnapshot operation is synchronous. In certain situations this 
> operation may hold the FSNamesystem lock for too long, bringing almost every 
> NameNode operation to a halt.
> We have observed one incident where it took so long that ZKFC believed the 
> NameNode was down. All other IPC threads were waiting to acquire the 
> FSNamesystem lock. This specific deleteSnapshot took ~70 seconds. ZKFC has a 
> connection timeout of 45 seconds by default, and if all IPC threads wait for 
> the FSNamesystem lock and can't accept new incoming connections, ZKFC times 
> out and advances the epoch, and the NameNode therefore loses its active NN 
> role and then fails.
> Relevant log:
> {noformat}
> Thread 154 (IPC Server handler 86 on 8020):
>   State: RUNNABLE
>   Blocked count: 2753455
>   Waited count: 89201773
>   Stack:
> 
> org.apache.hadoop.hdfs.server.namenode.INode$BlocksMapUpdateInfo.addDeleteBlock(INode.java:879)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.destroyAndCollectBlocks(INodeFile.java:508)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.destroyAndCollectBlocks(INodeDirectory.java:763)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.destroyAndCollectBlocks(INodeDirectory.java:763)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.destroyAndCollectBlocks(INodeDirectory.java:763)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.destroyAndCollectBlocks(INodeDirectory.java:763)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeReference.destroyAndCollectBlocks(INodeReference.java:339)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.destroyAndCollectBlocks(INodeReference.java:606)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$ChildrenDiff.destroyDeletedList(DirectoryWithSnapshotFeature.java:119)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$ChildrenDiff.access$400(DirectoryWithSnapshotFeature.java:61)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff.destroyDiffAndCollectBlocks(DirectoryWithSnapshotFeature.java:319)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff.destroyDiffAndCollectBlocks(DirectoryWithSnapshotFeature.java:167)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.AbstractINodeDiffList.deleteSnapshotDiff(AbstractINodeDiffList.java:83)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:745)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:776)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:747)
> 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:747)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:776)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:747)
> 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:789)
> {noformat}
> After ZKFC determined the NameNode was down and advanced the epoch, the NN 
> finished deleting the snapshot and sent the edit to the journal nodes, but 
> it was rejected because the epoch had been updated. See the following 
> stacktrace:
> {noformat}
> 10.0.16.21:8485: IPC's epoch 17 is less than the last promised epoch 18
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:429)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:457)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:352)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:149)
> at 
> 

[jira] [Commented] (HDFS-9822) Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped block at the same time

2017-08-17 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131507#comment-16131507
 ] 

SammiChen commented on HDFS-9822:
-

Hi [~andrew.wang], so far I haven't been able to reproduce the issue. I will 
try to see if it can meet the beta1 timeline.

> Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped 
> block at the same time
> 
>
> Key: HDFS-9822
> URL: https://issues.apache.org/jira/browse/HDFS-9822
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Tsz Wo Nicholas Sze
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-9822-001.patch, HDFS-9822-002.patch
>
>
> Found the following AssertionError in 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14501/testReport/org.apache.hadoop.hdfs.server.namenode/TestReconstructStripedBlocks/testMissingStripedBlockWithBusyNode2/
> {code}
> AssertionError: Should wait the previous reconstruction to finish
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.validateReconstructionWork(BlockManager.java:1680)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1536)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1472)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4229)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4100)
>   at java.lang.Thread.run(Thread.java:745)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4119)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11082) Provide replicated EC policy to replicate files

2017-08-17 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131503#comment-16131503
 ] 

SammiChen commented on HDFS-11082:
--

Thanks [~andrew.wang] for reviewing and committing the patch!

> Provide replicated EC policy to replicate files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch, 
> HDFS-11082.003.patch, HDFS-11082.004.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131502#comment-16131502
 ] 

Hudson commented on HDFS-12316:
---

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12207 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12207/])
HDFS-12316. Verify HDFS snapshot deletion doesn't crash the ongoing file 
(manojpec: rev 4230872dd66d748172903b1522885b03f34bbf9b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java


> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch, HDFS-12316.02.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12293) DataNode should log file name on disk error

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131454#comment-16131454
 ] 

Hadoop QA commented on HDFS-12293:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12293 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882434/HDFS-12293.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8feff599c05f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab1a8ae |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20747/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20747/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20747/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DataNode should log file name on disk error
> ---
>
> Key: HDFS-12293
> URL: https://issues.apache.org/jira/browse/HDFS-12293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: 

[jira] [Commented] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131450#comment-16131450
 ] 

Hadoop QA commented on HDFS-12316:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12316 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882433/HDFS-12316.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fe1c0d96c63d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab1a8ae |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20746/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20746/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20746/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message 

[jira] [Commented] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131446#comment-16131446
 ] 

Hudson commented on HDFS-12072:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12206 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12206/])
HDFS-12072. Provide fairness between EC and non-EC recovery tasks. (wang: rev 
b29894889742dda654cd88a7ce72a4e51fccb328)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12072.00.patch, HDFS-12072.01.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfer}} 
> reconstruction tasks for non-EC blocks; then, if that request cannot be 
> fulfilled, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
> maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>   pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
> .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>   DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is a constant supply of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.
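A minimal sketch of one possible fairness scheme, in the same context as the 
snippet above (the even split is an illustrative assumption, not necessarily 
the committed fix):
{code}
// Sketch only: reserve part of the transfer budget for EC reconstruction
// instead of letting replication consume all of maxTransfers first.
int replLimit = maxTransfers / 2;               // half for replication
List<BlockTargetPair> pendingList =
    nodeinfo.getReplicationCommand(replLimit);
if (pendingList != null) {
  cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
      pendingList));
  maxTransfers -= pendingList.size();
}
// Whatever remains (at least half the budget) goes to EC reconstruction.
List<BlockECReconstructionInfo> pendingECList =
    nodeinfo.getErasureCodeCommand(maxTransfers);
if (pendingECList != null) {
  cmds.add(new BlockECReconstructionCommand(
      DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
}
{code}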



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-10631:
---
Attachment: HDFS-10631-HDFS-10467-009.patch

# Addressed a couple more comments from [~subru].
# Reran {{TestRouterRpc}}; it doesn't fail on my local machine.

> Federation State Store ZooKeeper implementation
> ---
>
> Key: HDFS-10631
> URL: https://issues.apache.org/jira/browse/HDFS-10631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10631-HDFS-10467-001.patch, 
> HDFS-10631-HDFS-10467-002.patch, HDFS-10631-HDFS-10467-003.patch, 
> HDFS-10631-HDFS-10467-004.patch, HDFS-10631-HDFS-10467-005.patch, 
> HDFS-10631-HDFS-10467-006.patch, HDFS-10631-HDFS-10467-007.patch, 
> HDFS-10631-HDFS-10467-008.patch, HDFS-10631-HDFS-10467-009.patch
>
>
> State Store implementation using ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-08-17 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131414#comment-16131414
 ] 

Lei (Eddy) Xu commented on HDFS-12072:
--

Thanks [~andrew.wang]!

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12072.00.patch, HDFS-12072.01.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfer}} 
> reconstruction tasks for non-EC blocks; then, if that request cannot be 
> fulfilled, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
> maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>   pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
> .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>   DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is a constant supply of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12072:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Thanks for the contribution Eddy, committed this to trunk!

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12072.00.patch, HDFS-12072.01.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfer}} 
> reconstruction tasks for non-EC blocks; then, if that request cannot be 
> fulfilled, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
> maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>   pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
> .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>   DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is a constant supply of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131393#comment-16131393
 ] 

Andrew Wang commented on HDFS-12072:


LGTM +1, thanks for working on this Eddy, will commit shortly.

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12072.00.patch, HDFS-12072.01.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfer}} 
> reconstruction tasks for non-EC blocks; then, if that request cannot be 
> fulfilled, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
> maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>   pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
> .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>   DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is a constant supply of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131370#comment-16131370
 ] 

Manoj Govindassamy commented on HDFS-12316:
---

The above test failures are not related to the patch.

> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch, HDFS-12316.02.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131365#comment-16131365
 ] 

Hadoop QA commented on HDFS-12316:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12316 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882430/HDFS-12316.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 979959e914ad 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dd7916d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20745/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20745/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20745/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop 

[jira] [Commented] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131343#comment-16131343
 ] 

Yongjun Zhang commented on HDFS-12316:
--

Thanks [~manojg]. +1.


> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch, HDFS-12316.02.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131290#comment-16131290
 ] 

Yongjun Zhang commented on HDFS-12295:
--

About 2, the user doesn't manually add the prefix in command-line parameters; 
rather, it's the implementation of distcp, "hadoop fs -cp", etc. that adds the 
prefix (before calling getFileStatus and listStatus). So the path string 
"inconsistency" may only appear inside HDFS core code, and it may not be too 
bad. What do you think, [~daryn]? Thanks.
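A minimal sketch of the client-side prefixing described above; the class, 
constant, and helper names are assumptions for illustration, not the attached 
patch:
{code}
import org.apache.hadoop.fs.Path;

class BypassExtAttrUtil {
  // Sketch only: prepend the reserved prefix so the NameNode can detect it
  // and skip the external attribute provider for this lookup.
  static final String BYPASS_PREFIX = "/.reserved/bypassExtAttr";

  static Path addBypassPrefix(Path p) {
    // e.g. /a/b/c -> /.reserved/bypassExtAttr/a/b/c
    return new Path(BYPASS_PREFIX + p.toUri().getPath());
  }
}
{code}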

 



> NameNode to support file path prefix /.reserved/bypassExtAttr
> -
>
> Key: HDFS-12295
> URL: https://issues.apache.org/jira/browse/HDFS-12295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch
>
>
> Let the NameNode support the prefix /.reserved/bypassExtAttr, so a client 
> can add this prefix to a path before calling getFileStatus, e.g. /a/b/c 
> becomes /.reserved/bypassExtAttr/a/b/c. The NN will parse the path at the 
> very beginning and bypass the external attribute provider if the prefix is 
> there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131283#comment-16131283
 ] 

Hadoop QA commented on HDFS-12313:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
5s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 53s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12313 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882411/HDFS-12313-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2592a6890a40 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 293c425 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20744/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20744/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20744/console |
| 

[jira] [Updated] (HDFS-12293) DataNode should log file name on disk error

2017-08-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12293:
--
Attachment: HDFS-12293.02.patch

> DataNode should log file name on disk error
> ---
>
> Key: HDFS-12293
> URL: https://issues.apache.org/jira/browse/HDFS-12293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12293.01.patch, HDFS-12293.02.patch
>
>
> Found the following error message in precommit build 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/488/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {noformat}
> 2017-08-10 09:36:53,619 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_670847838_18 at /127.0.0.1:55851 [Receiving block 
> BP-219227751-172.17.0.2-1502357801473:blk_1073741829_1005]] WARN  
> datanode.DataNode (BlockReceiver.java:<init>(287)) - IOException in 
> BlockReceiver constructor. Cause is 
> java.io.IOException: Not a directory
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:302)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:306)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:933)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbw(FsVolumeImpl.java:1202)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1356)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:215)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> {noformat}
> It is not known what file was being created.
> What's interesting is that {{DatanodeUtil#createFileWithExistsCheck}} does 
> carry the file name in the exception message, but the exception handlers at 
> {{DataTransfer#run()}} and {{BlockReceiver#BlockReceiver}} ignore it:
> {code:title=BlockReceiver#BlockReceiver}
>   // check if there is a disk error
>   IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
>   DataNode.LOG.warn("IOException in BlockReceiver constructor"
>   + (cause == null ? "" : ". Cause is "), cause);
>   if (cause != null) {
> ioe = cause;
> // Volume error check moved to FileIoProvider
>   }
> {code}
> The logger should print the file name in addition to the cause.
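A minimal sketch of that suggestion (illustrative only, not the attached 
patch): pass the original exception, whose message carries the file name, to 
the logger.
{code}
// Sketch only: keep unwrapping the disk-error cause, but log the original
// IOException so the file name embedded in its message is preserved.
IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
DataNode.LOG.warn("IOException in BlockReceiver constructor"
    + (cause == null ? "" : ". Cause is " + cause), ioe);
if (cause != null) {
  ioe = cause;
  // Volume error check moved to FileIoProvider
}
{code}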



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12316:
--
Attachment: HDFS-12316.02.patch

Thanks for the quick review, [~yzhangal]. Attached the v02 patch, which 
extends the test case with more open files, deletes an open file while the 
stream is open, and adds some more comments. Please take a look.

> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch, HDFS-12316.02.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131255#comment-16131255
 ] 

Yongjun Zhang commented on HDFS-12316:
--

Thanks [~manojg], the patch looks good to me. Could we add some comments at 
the snapshot deletion call sites describing the intention and the problem we 
observed? I'm +1 other than that.




> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12316:
--
Status: Patch Available  (was: Open)

> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12316:
--
Attachment: HDFS-12316.01.patch

Attached a test to verify that snapshot deletions do not crash clients that 
are currently writing to files previously captured in the same snapshot.
[~yzhangal], can you please review the patch?
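For reference, a minimal sketch of the scenario the test exercises, using only 
public snapshot APIs (an illustrative sketch, not the attached patch):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestSnapshotDeleteWithOpenFile {
  @Test
  public void testDeleteSnapshotDoesNotCrashWriter() throws Exception {
    Configuration conf = new Configuration();
    // Capture open files in snapshots (the feature named in the description).
    conf.setBoolean("dfs.namenode.snapshot.capture.openfiles", true);
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      Path snapRoot = new Path("/snapRoot");
      fs.mkdirs(snapRoot);
      fs.allowSnapshot(snapRoot);
      FSDataOutputStream out = fs.create(new Path(snapRoot, "openFile"));
      out.write(new byte[1024]);
      out.hflush();                       // writer open with data in flight
      fs.createSnapshot(snapRoot, "s1");  // open file captured in snapshot
      fs.deleteSnapshot(snapRoot, "s1");  // must not crash the ongoing write
      out.write(new byte[1024]);          // stream should still be usable
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}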

> Verify HDFS snapshot deletion doesn't crash the ongoing file writes
> ---
>
> Key: HDFS-12316
> URL: https://issues.apache.org/jira/browse/HDFS-12316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12316.01.patch
>
>
> Recently we encountered a case where deletion of HDFS snapshots crashed a 
> client that was writing to a file under the same snapshot root. This open 
> file had previously been captured in the snapshot using the immutable 
> open-file-in-snapshot feature "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-17 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-12214:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10285
   Status: Resolved  (was: Patch Available)

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch, HDFS-12214-HDFS-10285-08.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> to the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in the HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12216) Ozone: TestKeys is failing consistently

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131239#comment-16131239
 ] 

Hadoop QA commented on HDFS-12216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882401/HDFS-12216-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d1cacef378a9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 293c425 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20743/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-17 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131243#comment-16131243
 ] 

Uma Maheswara Rao G commented on HDFS-12214:


Thank you, Andrew. Also, thanks to Brahma for the comments.

I have just pushed it to the branch.

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch, HDFS-12214-HDFS-10285-08.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> to the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in the HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131236#comment-16131236
 ] 

Anu Engineer commented on HDFS-12159:
-

[~xyao] Thanks for the review. I will commit this shortly.


> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch, 
> HDFS-12159-HDFS-7240.004.patch, HDFS-12159-HDFS-7240.005.patch, 
> HDFS-12159-HDFS-7240.006.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12216) Ozone: TestKeys is failing consistently

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131232#comment-16131232
 ] 

Anu Engineer commented on HDFS-12216:
-

I am +1 on the code changes, but I am not going to commit them now since the 
test failure is still present.
[~msingh], can you please share your thoughts on why TestKeys is still failing?

> Ozone: TestKeys is failing consistently
> ---
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12216-HDFS-7240.001.patch, 
> HDFS-12216-HDFS-7240.002.patch
>
>
> TestKeys and TestKeysRatis are failing consistently as noted in test logs for 
> HDFS-12183
> TestKeysRatis is failing because of the following error
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131192#comment-16131192
 ] 

Hadoop QA commented on HDFS-11882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882396/HDFS-11882.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3f87bcd8c838 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dd7916d |
| Default Java | 

[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131191#comment-16131191
 ] 

Anu Engineer commented on HDFS-12283:
-

Hi [~yuanbo], thanks for getting this done. This is a critical part of Ozone 
and I really appreciate your work on it. Sorry, my comments are a little late. 
My comments are on v3 of the patch.

* {{DeletedBlockLogImpl.java: Line 53}} #LATEST_TXID# -- Do we need this? 
Since this table will have nothing but TXIDs, do we need to keep the latest 
one under this specific key? In other words, since TXIDs are a monotonically 
increasing number and the keys are ordered by TXID, I am thinking that just 
seeking to the last key is all you really need. Please let me know what I am 
missing here. If you are doing this because {{getRangeKVs}} doesn't support 
reading the last key, then we should fix that. I know that [~cheersyang] 
suggested this change in an earlier review, but in my mind, RocksDB and 
LevelDB are simple trees, so seekToEnd should be relatively efficient, and I 
hope both DBs have that interface. (A sketch of what I mean follows below.)
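
A minimal sketch of the seek-based read, assuming the leveldbjni 
{{DBIterator}} API that this code already uses (variable names are 
illustrative):
{code}
// Read the latest TXID by seeking to the last key instead of maintaining a
// dedicated #LATEST_TXID# entry. Keys are 8-byte big-endian TXIDs, so
// lexicographic order matches numeric order.
long latestTxid = 0L;                       // empty log: start from 0
try (DBIterator it = db.iterator()) {       // org.iq80.leveldb.DBIterator
  it.seekToLast();
  if (it.hasNext()) {
    latestTxid = Longs.fromByteArray(it.peekNext().getKey());
  }
}
{code}
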
* {{DeletedBlockLogImpl.java#commitTransactions}}  This is a hypothetical 
question. During the commitTransaction call, imagine one of the TXIDs is 
invalid. Does it abort the whole list or do you want to catch an exception for 
that TXID and continue? Not really sure if that can ever happen or if we should 
be worried about it.

* {{DeleteBlockLogImpl.java#incrementCount}} -- I would like to understand 
what happens after we set this to -1:
{code}
// if the retry time exceeds the MAX_RETRY_TIME value
// then set the retry value to -1, stop retrying.
if (block.getCount() > maxRetry) {
  block.setCount(-1);
}
{code}
For the sake of argument, let us say we hit this line and set block.count to 
-1. What do we do now? Never delete the block? What if the machines were 
switched off and came back up after you set this to -1? Can I suggest that we 
keep warning once it is above the maxRetry limit, and write a log statement on 
every maxRetry-th hit? For example, if maxRetry is 5, we log a warning on the 
5th, 10th, and 15th tries, but never set this value to -1 and never give up. 
To summarize: while I fully understand why you are setting the value to -1, it 
is not clear to me what happens after that. (See the sketch below.)
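
To illustrate, a minimal sketch of the warn-and-keep-retrying idea (the 
accessor and logger names are hypothetical, not from the patch):
{code}
// Keep retrying forever; only log a warning on every maxRetry-th attempt
// (the 5th, 10th, 15th, ... when maxRetry = 5), instead of parking the
// block at count = -1.
block.setCount(block.getCount() + 1);
if (maxRetry > 0 && block.getCount() % maxRetry == 0) {
  LOG.warn("Deletion of txid {} has been retried {} times without success",
      txID, block.getCount());
}
{code}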

* Maybe we need to add an option in SCMCLI to purge these entries -- perhaps 
file a JIRA for that.
* {{addTransactions(Map blockMap)}} -- in this call, is there a size limit to 
the list of blocks in the argument? We might want to cap that at 50K or 
something like that.

* {{addTransactions}} -- batch.put(LATEST_TXID, Longs.toByteArray(latestId)); 
-- if we are seeking to the last key via a seek method, we will not need to do 
this. Please note, I am not asking whether we can iterate to the last key; I 
want to seek to the last key. In terms of a tree, what I am imagining is that 
we will only walk a couple of nodes that point to the larger values of the 
tree.


* {{addTransactions}} I think there is a small ordering error here. Please do 
correct me if I am wrong.
{code}
215   deletedStore.writeBatch(batch);
216   this.latestTxid = latestId;
{code}
Imagine that right after we execute line 215 (I am assuming that is a flush to 
disk for the time being), the execution fails. When we come back into this 
function next time, we do this:
{code}
206 long latestId = this.latestTxid;
{code}
Now suppose the txid we wrote to the database was, say, 105, and we failed 
before we could update {{this.latestTxid}}. We will end up reusing TXIDs, 
which means that we will lose some TXIDs or the value part of some TXIDs. It 
is like going back in time. The fix might be to change the order of these 
lines:
{code}
215   this.latestTxid = latestId;
216   deletedStore.writeBatch(batch);
{code}
With this order, the worst that can happen is that our TXIDs have holes, since 
we updated {{this.latestTxid}} before the write succeeded.

* ozone.scm.block.deletion.max.retry -- use this config for the warning 
logging?
* In the test -- add a testCommitTransactions case with one TXID being invalid?

Nits:
* line 62: typo or rename: largerThanLaestReadTxid --> 
largerThanLatestReadTxid or getNextTxid
* line 158: the comment seems to refer to MAX_RETRY_TIME in 
{{incrementCount}}, but no such variable exists.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep tracking container 
> blocks which 

[jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-17 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131166#comment-16131166
 ] 

Surendra Singh Lilhore commented on HDFS-12225:
---

Thanks [~rakeshr] for the reviews.

bq. How about renaming Candidate class, we could use StorageMovementTrackInfo 
or SatisfyTrackInfo or any better name. Also, IMHO to use isDir bool flag to 
classify dir/file and make it explicit.

I didn't get the 6th comment. The {{childCount}} field is not in the 
{{Candidate}} class.

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch
>
>
> We have discussed optimizing the number of extended attributes and agreed to 
> report a separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work.
> For the context, comment copied from HDFS-11150
> {quote}
> [~yuanbo] wrote : I've tried that before. There is an issue here if we only 
> mark the directory. When recovering from FsImage, the InodeMap isn't built 
> up, so we don't know the sub-inode of a given inode, in the end, We cannot 
> add these inodes to movement queue in FSDirectory#addToInodeMap, any 
> thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. Ok for simplicity we can 
> add for all Inodes now. For this to handle 100%, we may need intermittent 
> processing, like first we should add them to some intermittentList while 
> loading fsImage, once fully loaded and when starting active services, we 
> should process that list and do required stuff. But it would add some 
> additional complexity may be. Let's do with all file inodes now and we can 
> revisit later if it is really creating issues. How about you raise a JIRA for 
> it and think to optimize separately?
> {quote}
> {quote}
> [~andrew.wang] wrote in HDFS-10285 merge time review comment : HDFS-10899 
> also the cursor of the iterator in the EZ root xattr to track progress and 
> handle restarts. I wonder if we can do something similar here to avoid having 
> an xattr-per-file being moved.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12222) Add EC information to BlockLocation

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131022#comment-16131022
 ] 

Andrew Wang commented on HDFS-1:


Thanks for picking this up, Huafeng!

I liked Alex's proposal above to have the current public APIs for BlockLocation 
return just data blocks. Then rework the HDFS internals that need both data and 
parity blocks to call a new API. We might already have this separation in 
place, since the DFSClient uses the private-only LocatedBlock and 
HdfsFileStatus classes.

This sketch looks a bit different, but I think it can be tweaked to fit. A 
couple of review comments:

* If we agree that FileSystem users likely don't care about the details of the 
EC schema or even the parity blocks, then we don't need the 
ErasureCodedBlockLocation class. Just change makeQualifiedLocated and related 
methods to return only data blocks (a rough sketch of this shape follows 
below).
* We need to be careful about strictly adding new methods when adding a new 
parameter, for compatibility.
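
To make the shape concrete, a rough sketch; both getAllBlockLocations and 
isParityBlock are hypothetical names, not existing APIs:
{code}
// Public FileSystem-facing call: hide parity-block locations so that split
// computation only sees the logical (data) extent of the file.
public BlockLocation[] getFileBlockLocations(FileStatus file, long start,
    long len) throws IOException {
  BlockLocation[] all = getAllBlockLocations(file, start, len); // assumed internal API
  return Arrays.stream(all)
      .filter(loc -> !isParityBlock(loc))                       // assumed helper
      .toArray(BlockLocation[]::new);
}
{code}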

> Add EC information to BlockLocation
> ---
>
> Key: HDFS-1
> URL: https://issues.apache.org/jira/browse/HDFS-1
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-1.001.patch
>
>
> HDFS applications query block location information to compute splits. One 
> example of this is FileInputFormat:
> https://github.com/apache/hadoop/blob/d4015f8628dd973c7433639451a9acc3e741d2a2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L346
> You see bits of code like this that calculate offsets as follows:
> {noformat}
> long bytesInThisBlock = blkLocations[startIndex].getOffset() + 
>   blkLocations[startIndex].getLength() - offset;
> {noformat}
> EC confuses this since the block locations include parity block locations as 
> well, which are not part of the logical file length. This messes up the 
> offset calculation and thus topology/caching information too.
> Applications can figure out what's a parity block by reading the EC policy 
> and then parsing the schema, but it'd be a lot better if we exposed this more 
> generically in BlockLocation instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130981#comment-16130981
 ] 

Anu Engineer commented on HDFS-12313:
-

+1, v4. Thanks for the update. Pending Jenkins.

> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12313-HDFS-7240.001.patch, 
> HDFS-12313-HDFS-7240.002.patch, HDFS-12313-HDFS-7240.003.patch, 
> HDFS-12313-HDFS-7240.004.patch
>
>
> HDFS-12305 added StateMachine for pipeline/container. However, the package 
> location was incorrectly put under a new top-level package, 
> hadoop-hdfs-client. This was caused by my rename mistake before submitting 
> the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12313:
--
Attachment: HDFS-12313-HDFS-7240.004.patch

> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12313-HDFS-7240.001.patch, 
> HDFS-12313-HDFS-7240.002.patch, HDFS-12313-HDFS-7240.003.patch, 
> HDFS-12313-HDFS-7240.004.patch
>
>
> HDFS-12305 added StateMachine for pipeline/container. However, the package 
> location was incorrectly put under a new top-level package, 
> hadoop-hdfs-client. This was caused by my rename mistake before submitting 
> the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130950#comment-16130950
 ] 

Hadoop QA commented on HDFS-12313:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
28s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 59s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 49s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12313 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882380/HDFS-12313-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d341f893866 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 293c425 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20740/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20740/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 

[jira] [Created] (HDFS-12316) Verify HDFS snapshot deletion doesn't crash the ongoing file writes

2017-08-17 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-12316:
-

 Summary: Verify HDFS snapshot deletion doesn't crash the ongoing 
file writes
 Key: HDFS-12316
 URL: https://issues.apache.org/jira/browse/HDFS-12316
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


Recently we encountered a case where deleting HDFS snapshots crashed a client 
that was writing to a file under the same snapshot root. This open file had 
previously been captured in the snapshot via the immutable 
open-file-in-snapshot feature, "dfs.namenode.snapshot.capture.openfiles".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-17 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130945#comment-16130945
 ] 

Chen Liang commented on HDFS-12292:
---

Thanks [~erofeev] for the patch! Curious, though: is it possible to perform 
this path resolution only when viewfs is actually being used? It seems to me 
that in the non-viewfs case this check doesn't really do anything, and we end 
up spending extra, unnecessary time on each of these operations. (Please 
correct me if I'm wrong.) Something along the lines of the sketch below.
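
A minimal sketch of that gating, assuming the public PathData fields and the 
standard FileSystem API; the surrounding wiring is hypothetical:
{code}
// Only pay the resolution cost when the shell path is a viewfs path.
FileSystem fs = pathData.fs;                       // PathData's FileSystem
if ("viewfs".equals(fs.getUri().getScheme())) {
  // ViewFileSystem#resolvePath maps the mount entry to its target, e.g.
  // viewfs://vfs-root/user/uname -> hdfs://users-fs/user/uname
  Path resolved = fs.resolvePath(pathData.path);
  fs = resolved.getFileSystem(conf);               // the actual HDFS instance
}
{code}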

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is 
> federated; otherwise, an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it prevents relying on fs.defaultFS and forces the 
> creation of client-side mappings for management scripts.
> Implementation:
> PathData that is passed to commands should be resolved to its actual 
> FileSystem
> Result:
> ViewFS will be resolved to the actual HDFS file system



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9822) Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped block at the same time

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9822:
--
Target Version/s:   (was: )
  Status: Open  (was: Patch Available)

> Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped 
> block at the same time
> 
>
> Key: HDFS-9822
> URL: https://issues.apache.org/jira/browse/HDFS-9822
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Tsz Wo Nicholas Sze
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-9822-001.patch, HDFS-9822-002.patch
>
>
> Found the following AssertionError in 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14501/testReport/org.apache.hadoop.hdfs.server.namenode/TestReconstructStripedBlocks/testMissingStripedBlockWithBusyNode2/
> {code}
> AssertionError: Should wait the previous reconstruction to finish
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.validateReconstructionWork(BlockManager.java:1680)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1536)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1472)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4229)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4100)
>   at java.lang.Thread.run(Thread.java:745)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4119)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12315) Use Path instead of String in the TestHdfsAdmin.verifyOpenFiles()

2017-08-17 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130909#comment-16130909
 ] 

Chen Liang commented on HDFS-12315:
---

Thanks [~olegd] for the catch! v001 patch LGTM.
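
For reference, one way the check could be fixed (a sketch, not necessarily 
what the v001 patch does), assuming closedFiles is a HashSet<Path> as the 
description below notes:
{code}
// Compare Path to Path instead of Path to String.
String filePath = openFilesRemoteItr.next().getFilePath();
assertFalse(filePath + " should not be listed under open files!",
    closedFiles.contains(new Path(filePath)));
{code}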

> Use Path instead of String in the TestHdfsAdmin.verifyOpenFiles()
> -
>
> Key: HDFS-12315
> URL: https://issues.apache.org/jira/browse/HDFS-12315
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HDFS-12315.patch
>
>
> closedFiles is a set of Path; therefore, closedFiles.contains(String) 
> doesn't make sense.
> lines 252-261:
> {code:java}
>   private void verifyOpenFiles(HashSet closedFiles,
>   HashMap openFileMap) throws IOException {
> HdfsAdmin hdfsAdmin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
> HashSet openFiles = new HashSet<>(openFileMap.keySet());
> RemoteIterator openFilesRemoteItr =
> hdfsAdmin.listOpenFiles();
> while (openFilesRemoteItr.hasNext()) {
>   String filePath = openFilesRemoteItr.next().getFilePath();
>   assertFalse(filePath + " should not be listed under open files!",
>   closedFiles.contains(filePath));
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file

2017-08-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130910#comment-16130910
 ] 

Yongjun Zhang commented on HDFS-12162:
--

Thanks for working on this, [~ajakumar] and [~anu].

Sorry I was not on top of this. There are also the javadocs: all listStatus 
methods talk only about directories, not files. Maybe we can create a 
follow-up JIRA to fix those.

Thanks.




> Update listStatus document to describe the behavior when the argument is a 
> file
> ---
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs
>Reporter: Yongjun Zhang
>Assignee: Ajay Kumar
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12162.01.patch, Screen Shot 2017-08-03 at 11.01.46 
> AM.png, Screen Shot 2017-08-03 at 11.02.19 AM.png
>
>
> The listStatus method can take either a directory path or a file path as 
> input; however, currently both the javadoc and the external documentation 
> describe it as taking only a directory. This jira is to update the document 
> about the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139, creating this 
> jira is the result of our discussion there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-8225) EC client code should not print info log message

2017-08-17 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reassigned HDFS-8225:
-

Assignee: (was: Tsz Wo Nicholas Sze)

I actually won't be able to work on this nice-to-have JIRA.  Un-assigning 
myself.

> EC client code should not print info log message
> 
>
> Key: HDFS-8225
> URL: https://issues.apache.org/jira/browse/HDFS-8225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>  Labels: hdfs-ec-3.0-nice-to-have
>
> There are many LOG.info(..) calls in the code.  We should either remove them 
> or change the log level.  Users don't want to see any log message on the 
> screen when running the client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-17 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130850#comment-16130850
 ] 

George Huang edited comment on HDFS-11912 at 8/17/17 5:42 PM:
--

The test had a 10-minute timeout. However, it took more than 10 minutes to 
create 5000 test files in HDFS:

2017-08-17 03:12:54,076 [Thread-126] INFO  common.Storage 
(Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdf
...[truncated 9653305 chars]...
wed=trueugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=create  
src=/WITNESSDIR/1720/1719/1718/1717/1716/1715/1714/1713/file1720
dst=nullperm=jenkins:supergroup:rw-r--r--   proto=rpc
2017-08-17 03:22:52,582 [IPC Server handler 6 on 40751] INFO  hdfs.StateChange 
(FSDirWriteFileOp.java:logAllocatedBlock(787)) - BLOCK* allocate 
blk_1073745266_4442, replicas=127.0.0.1:43960 for 
/WITNESSDIR/1720/1719/1718/1717/1716/1715/1714/1713/file1720
2017-08-17 03:22:52,583 [DataXceiver for client 
DFSClient_NONMAPREDUCE_1460336802_1 at /127.0.0.1:53834 [Receiving block 
BP-233349655-172.17.0.2-1502939568997:blk_1073745266_4442]] INFO  
datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving 
BP-233349655-172.17.0.2-1502939568997:blk_1073745266_4442 src: /127.0.0.1:53834 
dest: /127.0.0.1:43960
2017-08-17 03:22:52,586 [PacketResponder: 
BP-233349655-172.17.0.2-1502939568997:blk_1073745266_4442, 
type=LAST_IN_PIPELINE] INFO  DataNode.clienttrace 
(BlockReceiver.java:finalizeBlock(1523)) - src: /127.0.0.1:53834, dest: 
/127.0.0.1:43960, bytes: 69, op: HDFS_WRITE, cliID: 
DFSClient_NONMAPREDUCE_1460336802_1, offset: 0, srvID: 
ae9a2151-cb92-461b-8d73-cd9641184228, blockid: 
BP-233349655-172.17.0.2-1502939568997:blk_1073745266_4442, duration(ns): 1413179
2017-08-17 03:22:52,586 [PacketResponder: 
BP-233349655-172.17.0.2-1502939568997:blk_1073745266_4442, 
type=LAST_IN_PIPELINE] INFO  datanode.DataNode (BlockReceiver.java:run(1496)) - 
PacketResponder: BP-233349655-172.17.0.2-1502939568997:blk_1073745266_4442, 
type=LAST_IN_PIPELINE terminating
2017-08-17 03:22:52,590 [IPC Server handler 4 on 40751] INFO  hdfs.StateChange 
(FSNamesystem.java:completeFile(2755)) - DIR* completeFile: 
/WITNESSDIR/1720/1719/1718/1717/1716/1715/1714/1713/file1720 is closed by 
DFSClient_NONMAPREDUCE_1460336802_1
2017-08-17 03:22:52,600 [IPC Server handler 5 on 40751] INFO  
FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7512)) - allowed=true 
  ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=getfileinfo 
src=/WITNESSDIR/1720/1719/1718/1717/1716/1715/1714/1713/file1720
dst=nullperm=null   proto=rpc
2017-08-17 03:22:52,601 [Thread-173] INFO  snapshot.TestRandomOpsWithSnapshots 
(TestRandomOpsWithSnapshots.java:createFiles(634)) - createFiles, file:


was (Author: ghuangups):
Test was having a 10 min timeout. However, setup related operations took almost 
10 mins and hence left no time for test to finish. Test starts at around 03:12, 
but it reaches to actual test at around 03:22:  :(

2017-08-17 03:12:47,798 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:(469)) - starting cluster: numNameNodes=1, 
numDataNodes=3
Formatting using clusterid: testClusterID
:
:
2017-08-17 03:12:54,076 [Thread-126] INFO  common.Storage 
(Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdf
...[truncated 9653305 chars]...
wed=trueugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=create  
src=/WITNESSDIR/1720/1719/1718/1717/1716/1715/1714/1713/file1720
dst=nullperm=jenkins:supergroup:rw-r--r--   proto=rpc
2017-08-17 03:22:52,582 [IPC Server handler 6 on 40751] INFO  hdfs.StateChange 
:
:



> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12216) Ozone: TestKeys is failing consistently

2017-08-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12216:
-
Attachment: HDFS-12216-HDFS-7240.002.patch

> Ozone: TestKeys is failing consistently
> ---
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12216-HDFS-7240.001.patch, 
> HDFS-12216-HDFS-7240.002.patch
>
>
> TestKeys and TestKeysRatis are failing consistently as noted in test logs for 
> HDFS-12183
> TestKeysRatis is failing because of the following error
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9822) Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped block at the same time

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130881#comment-16130881
 ] 

Hadoop QA commented on HDFS-9822:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-9822 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9822 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791223/HDFS-9822-002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20742/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped 
> block at the same time
> 
>
> Key: HDFS-9822
> URL: https://issues.apache.org/jira/browse/HDFS-9822
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Tsz Wo Nicholas Sze
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-9822-001.patch, HDFS-9822-002.patch
>
>
> Found the following AssertionError in 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14501/testReport/org.apache.hadoop.hdfs.server.namenode/TestReconstructStripedBlocks/testMissingStripedBlockWithBusyNode2/
> {code}
> AssertionError: Should wait the previous reconstruction to finish
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.validateReconstructionWork(BlockManager.java:1680)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1536)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1472)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4229)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4100)
>   at java.lang.Thread.run(Thread.java:745)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4119)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130875#comment-16130875
 ] 

Xiaoyu Yao commented on HDFS-12313:
---

Agreed. I will move it to the proper client location so that it can be shared 
by both the client and the server.

> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12313-HDFS-7240.001.patch, 
> HDFS-12313-HDFS-7240.002.patch, HDFS-12313-HDFS-7240.003.patch
>
>
> HDFS-12305 added StateMachine for pipeline/container. However, the package 
> location was incorrectly put under a new top-level package, 
> hadoop-hdfs-client. This was caused by my rename mistake before submitting 
> the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130862#comment-16130862
 ] 

Anu Engineer commented on HDFS-12313:
-

I thought you wanted to put it in the client, since the stuff you put there is 
available on both the server and the client. Plus, the class is a very generic 
one that can be used everywhere. I am OK if you want to move it to the server; 
I am just flagging that it is fine where it is.


> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12313-HDFS-7240.001.patch, 
> HDFS-12313-HDFS-7240.002.patch, HDFS-12313-HDFS-7240.003.patch
>
>
> HDFS-12305 added StateMachine for pipeline/container. However, the package 
> location was incorrectly put under a new top-level package, 
> hadoop-hdfs-client. This was caused by my rename mistake before submitting 
> the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8387) Erasure Coding: Revisit the long and int datatypes usage in striping logic

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130859#comment-16130859
 ] 

Andrew Wang commented on HDFS-8387:
---

Thanks for identifying and analyzing these cases, Rakesh! For the first case, 
since we mod by stripeSize (which is an int), the result can't be bigger than 
an int. So this seems okay too? (A worked example is below.)
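
A small illustration of that reasoning; the concrete values are made up, only 
the modulo property matters:
{code}
// For any non-negative long offset and int stripeSize > 0,
// offset % stripeSize lies in [0, stripeSize), so it always fits in an int.
long offset = 137438953472L;             // e.g. 128 GiB into the file
int stripeSize = 6 * 1024 * 1024;        // hypothetical data-stripe width
int posInStripe = (int) (offset % stripeSize);  // safe narrowing cast
{code}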

> Erasure Coding: Revisit the long and int datatypes usage in striping logic
> --
>
> Key: HDFS-8387
> URL: https://issues.apache.org/jira/browse/HDFS-8387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-nice-to-have
>
> This idea of this jira is to revisit the usage of {{long}} and {{int}} data 
> types in the striping logic.
> Related discussion 
> [here|https://issues.apache.org/jira/browse/HDFS-8294?focusedCommentId=14540788=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14540788]
>  in HDFS-8294



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9822) Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped block at the same time

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130860#comment-16130860
 ] 

Andrew Wang commented on HDFS-9822:
---

Hey Sammi, are you still planning to work on this for beta1?

> Erasure Coding: Avoids scheduling multiple reconstruction tasks for a striped 
> block at the same time
> 
>
> Key: HDFS-9822
> URL: https://issues.apache.org/jira/browse/HDFS-9822
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Tsz Wo Nicholas Sze
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-9822-001.patch, HDFS-9822-002.patch
>
>
> Found the following AssertionError in 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14501/testReport/org.apache.hadoop.hdfs.server.namenode/TestReconstructStripedBlocks/testMissingStripedBlockWithBusyNode2/
> {code}
> AssertionError: Should wait the previous reconstruction to finish
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.validateReconstructionWork(BlockManager.java:1680)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1536)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1472)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4229)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4100)
>   at java.lang.Thread.run(Thread.java:745)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4119)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-17 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130850#comment-16130850
 ] 

George Huang commented on HDFS-11912:
-

The test had a 10-minute timeout. However, setup-related operations took 
almost 10 minutes and hence left no time for the test itself to finish. The 
test starts at around 03:12 but only reaches the actual test body at around 
03:22. :(

2017-08-17 03:12:47,798 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:(469)) - starting cluster: numNameNodes=1, 
numDataNodes=3
Formatting using clusterid: testClusterID
:
:
2017-08-17 03:12:54,076 [Thread-126] INFO  common.Storage 
(Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdf
...[truncated 9653305 chars]...
wed=trueugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=create  
src=/WITNESSDIR/1720/1719/1718/1717/1716/1715/1714/1713/file1720
dst=nullperm=jenkins:supergroup:rw-r--r--   proto=rpc
2017-08-17 03:22:52,582 [IPC Server handler 6 on 40751] INFO  hdfs.StateChange 
:
:



> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-11882:
--

Assignee: Andrew Wang  (was: Akira Ajisaka)

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.04.patch, HDFS-11882.regressiontest.patch
>
>
> Some tests of erasure coding fails by the following exception. The following 
> test was removed by HDFS-11823, however, this type of error can happen in 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
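
For orientation, the failing precondition amounts to a guard of roughly this 
shape (a hypothetical paraphrase based on the issue title, not the actual 
DFSStripedOutputStream source; the values are illustrative):
{code}
import com.google.common.base.Preconditions;

// The striped client assumes acknowledged bytes can never exceed bytes sent;
// an acked size greater than bytes sent trips checkState and fails the writer.
long sentBytes = 1638400L;
long ackedBytes = 1638912L;
Preconditions.checkState(ackedBytes <= sentBytes,
    "acked %s exceeds sent %s", ackedBytes, sentBytes);
{code}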






[jira] [Resolved] (HDFS-11813) TestDFSStripedOutputStreamWithFailure070 failed randomly

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-11813.

Resolution: Duplicate

This looks the same as HDFS-11882 where we've got a patch that seems close, 
let's dupe to that one.

> TestDFSStripedOutputStreamWithFailure070 failed randomly
> 
>
> Key: HDFS-11813
> URL: https://issues.apache.org/jira/browse/HDFS-11813
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
>
> TestDFSStripedOutputStreamWithFailure070 failed randomly. Here is the stack 
> trace,
> java.lang.AssertionError: failed, dn=0, 
> length=1638400java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:360)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.run(TestDFSStripedOutputStreamWithFailure.java:574)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.test7(TestDFSStripedOutputStreamWithFailure.java:614)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:365)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.run(TestDFSStripedOutputStreamWithFailure.java:574)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.test7(TestDFSStripedOutputStreamWithFailure.java:614)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Updated] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11882:
---
Attachment: HDFS-11882.04.patch

Thanks for the patch, Akira. I fixed the checkstyle issues and addressed your 
review comment.

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.04.patch, HDFS-11882.regressiontest.patch
>
>
> Some tests of erasure coding fails by the following exception. The following 
> test was removed by HDFS-11823, however, this type of error can happen in 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (HDFS-12250) Reduce usage of FsPermissionExtension in unit tests

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130834#comment-16130834
 ] 

Hudson commented on HDFS-12250:
---

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12204 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12204/])
HDFS-12250. Reduce usage of FsPermissionExtension in unit tests. (wang: rev 
dd7916d3cd5d880d0b257d229f43f10feff04c93)
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListingFileStatus.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/ClientDistributedCacheManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java


> Reduce usage of FsPermissionExtension in unit tests
> ---
>
> Key: HDFS-12250
> URL: https://issues.apache.org/jira/browse/HDFS-12250
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12250.000.patch, HDFS-12250.001.patch, 
> HDFS-12250.002.patch
>
>
> HDFS-6984 deprecated FsPermissionExtension, moving the flags to FileStatus. 
> This generated a large number of deprecation warnings, particularly in unit 
> tests.
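
For context, the direction of that change looks roughly like this (a sketch; it 
assumes the flag accessors added to FileStatus alongside HDFS-6984):
{code}
FileStatus st = fs.getFileStatus(path);
// Read the attribute flags from FileStatus itself...
boolean hasAcl = st.hasAcl();
boolean encrypted = st.isEncrypted();
boolean erasureCoded = st.isErasureCoded();
// ...instead of decoding them from FsPermissionExtension bits on FsPermission.
{code}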






[jira] [Commented] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130821#comment-16130821
 ] 

Andrew Wang commented on HDFS-12303:


Hi Wei, thanks for working on this! A single review comment:

* I notice there's still an old reference to XOR-64k in TestDistCpUtils

Toggling the JIRA status doesn't retrigger precommit; you'd need to re-upload 
the same patch. I ran a few of the failed tests and saw that they succeeded, so 
hopefully it's just a flake.

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch
>
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.






[jira] [Updated] (HDFS-12250) Reduce usage of FsPermissionExtension in unit tests

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12250:
---
Summary: Reduce usage of FsPermissionExtension in unit tests  (was: Reduce 
usage of FsPermissionExtension in HDFS unit tests)

> Reduce usage of FsPermissionExtension in unit tests
> ---
>
> Key: HDFS-12250
> URL: https://issues.apache.org/jira/browse/HDFS-12250
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12250.000.patch, HDFS-12250.001.patch, 
> HDFS-12250.002.patch
>
>
> HDFS-6984 deprecated FsPermissionExtension, moving the flags to FileStatus. 
> This generated a large number of deprecation warnings, particularly in unit 
> tests.






[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130781#comment-16130781
 ] 

Andrew Wang commented on HDFS-12214:


Quick skim looks good to me, thanks Rakesh, Uma!

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch, HDFS-12214-HDFS-10285-08.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"






[jira] [Commented] (HDFS-12250) Reduce usage of FsPermissionExtension in unit tests

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130776#comment-16130776
 ] 

Andrew Wang commented on HDFS-12250:


Committed to trunk, thank you for the contribution!

> Reduce usage of FsPermissionExtension in unit tests
> ---
>
> Key: HDFS-12250
> URL: https://issues.apache.org/jira/browse/HDFS-12250
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12250.000.patch, HDFS-12250.001.patch, 
> HDFS-12250.002.patch
>
>
> HDFS-6984 deprecated FsPermissionExtension, moving the flags to FileStatus. 
> This generated a large number of deprecation warnings, particularly in unit 
> tests.






[jira] [Updated] (HDFS-12250) Reduce usage of FsPermissionExtension in unit tests

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12250:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

> Reduce usage of FsPermissionExtension in unit tests
> ---
>
> Key: HDFS-12250
> URL: https://issues.apache.org/jira/browse/HDFS-12250
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12250.000.patch, HDFS-12250.001.patch, 
> HDFS-12250.002.patch
>
>
> HDFS-6984 deprecated FsPermissionExtension, moving the flags to FileStatus. 
> This generated a large number of deprecation warnings, particularly in unit 
> tests.






[jira] [Commented] (HDFS-12250) Reduce usage of FsPermissionExtension in HDFS unit tests

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130774#comment-16130774
 ] 

Andrew Wang commented on HDFS-12250:


LGTM +1 thanks Chris!

> Reduce usage of FsPermissionExtension in HDFS unit tests
> 
>
> Key: HDFS-12250
> URL: https://issues.apache.org/jira/browse/HDFS-12250
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12250.000.patch, HDFS-12250.001.patch, 
> HDFS-12250.002.patch
>
>
> HDFS-6984 deprecated FsPermissionExtension, moving the flags to FileStatus. 
> This generated a large number of deprecation warnings, particularly in unit 
> tests.






[jira] [Assigned] (HDFS-12250) Reduce usage of FsPermissionExtension in HDFS unit tests

2017-08-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-12250:
--

Assignee: Chris Douglas

> Reduce usage of FsPermissionExtension in HDFS unit tests
> 
>
> Key: HDFS-12250
> URL: https://issues.apache.org/jira/browse/HDFS-12250
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12250.000.patch, HDFS-12250.001.patch, 
> HDFS-12250.002.patch
>
>
> HDFS-6984 deprecated FsPermissionExtension, moving the flags to FileStatus. 
> This generated a large number of deprecation warnings, particularly in unit 
> tests.






[jira] [Updated] (HDFS-12313) Ozone: SCM: move container/pipeline StateMachine to the right package

2017-08-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12313:
--
Attachment: HDFS-12313-HDFS-7240.003.patch

Attached a patch that fixes the Jenkins issue.

> Ozone: SCM: move container/pipeline StateMachine to the right package
> -
>
> Key: HDFS-12313
> URL: https://issues.apache.org/jira/browse/HDFS-12313
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12313-HDFS-7240.001.patch, 
> HDFS-12313-HDFS-7240.002.patch, HDFS-12313-HDFS-7240.003.patch
>
>
> HDFS-12305 added a StateMachine for pipeline/container. However, the package 
> location was incorrectly put under a new top-level package, hadoop-hdfs-client. 
> This was caused by my rename mistake before submitting the patch.
> This ticket is opened to move it to the right package under 
> hadoop-hdfs-project/hadoop-hdfs.






[jira] [Comment Edited] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129986#comment-16129986
 ] 

Yongjun Zhang edited comment on HDFS-12295 at 8/17/17 2:44 PM:
---

Hi [~daryn],

The proposed solution here tries to address distcp; your comment made me aware 
that "hadoop fs -cp" would have the same problem to solve. Thanks again for 
that.

There are several proposals so far:

1. HDFS-12202: add a new set of interfaces to getFileStatus and listStatus, and 
call them when needed to solve the problem (distcp, "hadoop fs -cp", etc.)
Pros: clear interface, no confusion
Cons: the change is very wide. We would have to introduce dummy implementations 
for FileSystems that don't support an attribute provider.

2. HDFS-12294: encode the additional parameter into the path string itself, 
extract the prefix from the path string, and add the prefix when needed to solve 
the problem (distcp, "hadoop fs -cp", etc.)
Pros: no need to change the FileSystem interface
Cons: potentially inconsistent path strings at different places, since the 
prefix is only relevant to certain operations.

3. Let the external attribute provider fall through to HDFS if it's a certain 
user. This is discussed in an HDFS-12202 comment.
Pros: maybe simpler to implement
Cons: potentially won't work (since the same user may want to get data from the 
attribute provider, and other users need to run distcp and "hadoop fs -cp" too)

[~daryn], [~chris.douglas], [~asuresh], [~andrew.wang], [~manojg], thanks for 
your comments earlier. Do you think my summary above is reasonable? Any better 
ideas or further thoughts to share?

Really appreciate it.

was (Author: yzhangal):
Hi [~daryn],

The proposed solution here tries to address distcp, your comment made me aware 
of that "hadoop fs -cp" would have the same problem to solve. Thanks again for 
that.

There are several proposals so far:

1. HDFS-12202, add a new set of interface to getFileStatus and listStatus, call 
this set of interface when needed to solve the problem (distcp, "hadoop fs -cp" 
etc)
Pros: clear interface, no confusion
Cons: change is too wide. Have to introduce dummy implementation for 
FileSystems that don't support attribute provider.

2. HDFS-12294, encode the additional parameter to the path string itself, and 
extract the prefix from path string. And add the prefix when needed to solve 
the problem (distcp, "hadoop fs -cp" etc)
Pros: no need to change FileSystem interface
Cons: inconsistent path string at different places potentially. Since the 
prefix is only relevant to certain operations.

3. let the external attribute provider to fall through to HDFS if it's a 
certain user. This is discussed in HDFS-12202 comment. 
 Pros: maybe simpler to implement
Cons: potentially won't work (since the same user may want to get data from 
attribute provider, and other user need to run distcp and "hadoop fs -cp" too)

[~daryn], [~chris.douglas], [~asuresh], [~andrew.wang], [~manojg], thanks for 
your comment earlier, do you think my summary above is reasonable? any better 
idea or further thoughts to share?  

Really appreciate it.







> NameNode to support file path prefix /.reserved/bypassExtAttr
> -
>
> Key: HDFS-12295
> URL: https://issues.apache.org/jira/browse/HDFS-12295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch
>
>
> Let NameNode support the prefix /.reserved/bypassExtAttr, so a client can add 
> this prefix to a path before calling getFileStatus, e.g. /a/b/c becomes 
> /.reserved/bypassExtAttr/a/b/c. NN will parse the path at the very beginning, 
> and bypass the external attribute provider if the prefix is there.
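
A hypothetical client-side illustration of the proposed prefixing (the prefix 
string comes from the description above; the path and call are assumptions):
{code}
Path original = new Path("/a/b/c");
// Prepend the reserved prefix; per this proposal the NameNode would strip it
// and skip the external attribute provider for the lookup.
Path bypass = new Path("/.reserved/bypassExtAttr" + original.toUri().getPath());
FileStatus status = fs.getFileStatus(bypass);
{code}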






[jira] [Commented] (HDFS-12280) Ozone: TestOzoneContainer#testCreateOzoneContainer fails

2017-08-17 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130440#comment-16130440
 ] 

Lokesh Jain commented on HDFS-12280:


Hi

The error occurs when a DataNode is already running on the node where the test 
runs, so port 50011 is already in use. For the DataNode in MiniOzoneCluster to 
use a different port, either the property "dfs.container.ipc" must be set to a 
free port, or "dfs.container.ipc.random.port" must be set to true.
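
A minimal sketch of the second option, assuming the Ozone test conventions on 
the HDFS-7240 branch (the property names come from the comment above; the rest 
is illustrative):
{code}
OzoneConfiguration conf = new OzoneConfiguration();
// Let the MiniOzoneCluster datanode bind an ephemeral port instead of the
// default 50011, so a datanode already running on the host cannot collide.
conf.setBoolean("dfs.container.ipc.random.port", true);
// Alternatively, pin a known-free port explicitly (hypothetical value):
// conf.setInt("dfs.container.ipc", 50012);
{code}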

> Ozone: TestOzoneContainer#testCreateOzoneContainer fails
> 
>
> Key: HDFS-12280
> URL: https://issues.apache.org/jira/browse/HDFS-12280
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Lokesh Jain
>
> {{org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer#testCreateOzoneContainer}}
>  fails with the below error
> {code}
> Running org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 64.507 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> testCreateOzoneContainer(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 64.44 sec  <<< ERROR!
> java.io.IOException: Failed to start MiniOzoneCluster
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:370)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster.waitOzoneReady(MiniOzoneCluster.java:239)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:422)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testCreateOzoneContainer(TestOzoneContainer.java:62)
> {code}






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130411#comment-16130411
 ] 

Hadoop QA commented on HDFS-12283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 38s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12283 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882328/HDFS-12283-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 14fea562af41 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 293c425 |
| 

[jira] [Commented] (HDFS-12216) Ozone: TestKeys is failing consistently

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130345#comment-16130345
 ] 

Hadoop QA commented on HDFS-12216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.TestLeaseRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882324/HDFS-12216-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6042cf7971c5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDFS-12039) Ozone: Implement update volume owner in ozone shell

2017-08-17 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130304#comment-16130304
 ] 

Lokesh Jain commented on HDFS-12039:


Hi

The volume owner can be updated using the -user option. For example, the 
following commands work:
{code}
bin/hdfs oz -updateVolume http://localhost:9864/vol1 -user xyz -root
bin/hdfs oz -infoVolume http://localhost:9864/vol1 -root
{
  "owner" : {
    "name" : "xyz"
  },
  "quota" : {
    "unit" : "GB",
    "size" : 1
  },
  "volumeName" : "vol1",
  "createdOn" : "Tue, 15 Aug 2017 02:52:10 GMT",
  "createdBy" : "hdfs"
}
{code}

> Ozone: Implement update volume owner in ozone shell
> ---
>
> Key: HDFS-12039
> URL: https://issues.apache.org/jira/browse/HDFS-12039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Lokesh Jain
>
> Ozone shell command {{updateVolume}} should support to update the owner of a 
> volume, using following syntax
> {code}
> hdfs oz -updateVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -owner 
> xyz -root
> {code}
> this could work from rest api, following command could change the volume 
> owner to {{www}}
> {code}
> curl -X PUT -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" 
> -H "x-ozone-user:www" -H "Authorization:OZONE root" 
> http://ozone1.fyre.ibm.com:9864/volume-wwei-0
> {code}






[jira] [Assigned] (HDFS-12280) Ozone: TestOzoneContainer#testCreateOzoneContainer fails

2017-08-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDFS-12280:


Assignee: Lokesh Jain

> Ozone: TestOzoneContainer#testCreateOzoneContainer fails
> 
>
> Key: HDFS-12280
> URL: https://issues.apache.org/jira/browse/HDFS-12280
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Lokesh Jain
>
> {{org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer#testCreateOzoneContainer}}
>  fails with the below error
> {code}
> Running org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 64.507 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> testCreateOzoneContainer(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 64.44 sec  <<< ERROR!
> java.io.IOException: Failed to start MiniOzoneCluster
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:370)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster.waitOzoneReady(MiniOzoneCluster.java:239)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:422)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testCreateOzoneContainer(TestOzoneContainer.java:62)
> {code}






[jira] [Assigned] (HDFS-12039) Ozone: Implement update volume owner in ozone shell

2017-08-17 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDFS-12039:
--

Assignee: Lokesh Jain  (was: Mukul Kumar Singh)

> Ozone: Implement update volume owner in ozone shell
> ---
>
> Key: HDFS-12039
> URL: https://issues.apache.org/jira/browse/HDFS-12039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Lokesh Jain
>
> Ozone shell command {{updateVolume}} should support to update the owner of a 
> volume, using following syntax
> {code}
> hdfs oz -updateVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -owner 
> xyz -root
> {code}
> this could work from rest api, following command could change the volume 
> owner to {{www}}
> {code}
> curl -X PUT -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" 
> -H "x-ozone-user:www" -H "Authorization:OZONE root" 
> http://ozone1.fyre.ibm.com:9864/volume-wwei-0
> {code}






[jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-17 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130299#comment-16130299
 ] 

Rakesh R commented on HDFS-12225:
-

Good work [~surendrasingh]. Adding a few comments on the patch; please take a 
look.
# Please remove unused variable in {{BlockManager.java}}
{code}
  /**
   * Whether HA is enabled.
   */
  private final boolean haEnabled;
{code}
# Please add a log message indicating the thread exit path; it would be helpful 
to admins.
# Name the thread -> {{new Daemon(new PendingSPSTaskScanner())}}
# Rephrase {{// Maybe file got deleted, from the queue}} to {{// File doesn't 
exist (maybe got deleted), remove trackId from the queue}}
# Typo: {{stratify the policy.}} -> {{satisfy the policy.}}
# It would be good to unify the names for better code maintenance. So far, 
trackId or trackInfo is used to represent the storage-movement-needed item. How 
about renaming the {{Candidate}} class? We could use {{StorageMovementTrackInfo}}, 
{{SatisfyTrackInfo}}, or any better name. Also, IMHO, use an {{isDir}} bool flag 
to classify dir vs. file and make it explicit.
{code}
  static class SatisfyTrackInfo {
private final Long trackId;
// Value will be 0 for a file path. Value will be >= 0 for a dir path.
private final Long childCount;
// true represents dir path, false otherwise.
private final boolean isDir;
  }
{code}
# Please ensure the {{pendingWorkForDirectory}} is cleaned up during 
{{SPS#postBlkStorageMovementCleanup}}.
# Please take care and update existing javadocs, below are few occurrences:
{code}
   * @param id
   *  - file block collection id.
   */
  public void satisfyStoragePolicy(Long inodeId, List candidates) {


   * @param blockCollectionID
   *  - tracking id / block collection id
   * @param allBlockLocsAttemptedToSatisfy
   *  - failed to find matching target nodes to satisfy storage type for
   *  all the block locations of the given blockCollectionID
   */
  public void add(Candidate candidate,
  boolean allBlockLocsAttemptedToSatisfy) {


private ItemInfo(long lastAttemptedOrReportedTime, Long parentId,
boolean allBlockLocsAttemptedToSatisfy) {
{code}
# Make the unit test more stable by replacing the constant 
{{Thread.sleep(6000);}} with sliced sleeps of 250/100 millis in a loop that 
rechecks for null. Maybe you could try {{DFSTestUtil.waitForXattrRemoved()}} 
and wait for 10 secs; see the sketch below.
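
A minimal sketch of the sliced wait in item 9, using GenericTestUtils.waitFor 
as one way to implement it (the predicate is an assumed example; it presumes 
the xattr state is observable through the test's inode handle):
{code}
// Poll every 100 ms, up to 10 s, instead of a fixed Thread.sleep(6000);
// waitFor throws TimeoutException if the xattr is still present after 10 s.
GenericTestUtils.waitFor(new com.google.common.base.Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return inode.getXAttrFeature() == null; // xattr removed yet?
  }
}, 100, 10000);
{code}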

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch
>
>
> We discussed optimizing the number of extended attributes and agreed to file a 
> separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127].
> This is the JIRA to track that work.
> For the context, comment copied from HDFS-11150
> {quote}
> [~yuanbo] wrote : I've tried that before. There is an issue here if we only 
> mark the directory. When recovering from FsImage, the InodeMap isn't built 
> up, so we don't know the sub-inode of a given inode, in the end, We cannot 
> add these inodes to movement queue in FSDirectory#addToInodeMap, any 
> thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. Ok for simplicity we can 
> add for all Inodes now. For this to handle 100%, we may need intermittent 
> processing, like first we should add them to some intermittentList while 
> loading fsImage, once fully loaded and when starting active services, we 
> should process that list and do required stuff. But it would add some 
> additional complexity may be. Let's do with all file inodes now and we can 
> revisit later if it is really creating issues. How about you raise a JIRA for 
> it and think to optimize separately?
> {quote}
> {quote}
> [~andrew.wang] wrote in HDFS-10285 merge time review comment : HDFS-10899 
> also the cursor of the iterator in the EZ root xattr to track progress and 
> handle restarts. I wonder if we can do something similar here to avoid having 
> an xattr-per-file being moved.
> {quote}






[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-17 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Attachment: HDFS-12283-HDFS-7240.003.patch

[~cheersyang] Thanks for your reminder; uploaded the v3 patch.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how they are processed. We 
> can use RocksDB to implement the 1st version of the log; the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID for ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a datanode; 
> it represents the "state" of the transaction and is in the range \[-1, 5\]: -1 
> means the transaction eventually failed after some retries, and 5 is the max 
> number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.
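
A rough sketch of what such an interface could look like, matching the schema 
above (method names are illustrative assumptions, not a committed API):
{code}
public interface DeletedBlockLog extends java.io.Closeable {
  /** Append one transaction: a container name plus its under-deletion blocks. */
  void addTransaction(String containerName, java.util.List<String> blocks)
      throws java.io.IOException;

  /** Bump ProcessedCount after a resend; -1 marks the transaction as failed. */
  void incrementCount(long txID) throws java.io.IOException;

  /** Drop transactions whose deletes the datanodes have acknowledged. */
  void commitTransactions(java.util.List<Long> txIDs) throws java.io.IOException;
}
{code}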






[jira] [Updated] (HDFS-12216) Ozone: TestKeys is failing consistently

2017-08-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12216:
-
Summary: Ozone: TestKeys is failing consistently  (was: Ozone: TestKeys and 
TestKeysRatis are failing consistently)

> Ozone: TestKeys is failing consistently
> ---
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12216-HDFS-7240.001.patch
>
>
> TestKeys and TestKeysRatis are failing consistently as noted in test logs for 
> HDFS-12183
> TestKeysRatis is failing because of the following error
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}






[jira] [Updated] (HDFS-12216) Ozone: TestKeys and TestKeysRatis are failing consistently

2017-08-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12216:
-
Status: Patch Available  (was: Open)

> Ozone: TestKeys and TestKeysRatis are failing consistently
> --
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12216-HDFS-7240.001.patch
>
>
> TestKeys and TestKeysRatis are failing consistently as noted in test logs for 
> HDFS-12183
> TestKeysRatis is failing because of the following error
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}






[jira] [Updated] (HDFS-12216) Ozone: TestKeys and TestKeysRatis are failing consistently

2017-08-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12216:
-
Attachment: HDFS-12216-HDFS-7240.001.patch

> Ozone: TestKeys and TestKeysRatis are failing consistently
> --
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12216-HDFS-7240.001.patch
>
>
> TestKeys and TestKeysRatis are failing consistently as noted in test logs for 
> HDFS-12183
> TestKeysRatis is failing because of the following error
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused: /127.0.0.1:55914
> {code}
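
The {{ConnectException}} in TestKeys, on the other hand, looks like the client racing datanode/XceiverServer startup. A hedged test-side sketch of waiting for the port before creating the client, using {{GenericTestUtils.waitFor}}; the {{port}} variable and the timeout values are assumptions for illustration, not taken from the patch:

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.hadoop.test.GenericTestUtils;

// Inside a test method (declared throws Exception): block until the
// container port accepts connections, polling every 100ms for up to 30s.
GenericTestUtils.waitFor(() -> {
  try (Socket socket = new Socket()) {
    socket.connect(new InetSocketAddress("127.0.0.1", port), 100);
    return true;
  } catch (IOException e) {
    return false;
  }
}, 100, 30000);
{code}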



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12248) SNN will not upload fsimage on IOE and Interrupted exceptions

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130232#comment-16130232
 ] 

Hadoop QA commented on HDFS-12248:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 55 unchanged - 1 fixed = 58 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12248 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882301/HDFS-12248-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4b271a614c13 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f04cb4 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20737/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20737/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20737/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20737/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SNN will not upload fsimage on IOE and Interrupted exceptions
> 

[jira] [Commented] (HDFS-12254) Upgrade JUnit from 4 to 5 in hadoop-hdfs

2017-08-17 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130160#comment-16130160
 ] 

Akira Ajisaka commented on HDFS-12254:
--

+1 for option 2. Thanks.

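Whichever option lands, the per-test changes are largely mechanical. A minimal hypothetical sketch of a migrated test, Jupiter API only, with the module dependency wiring left out:

{code}
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;  // replaces org.junit.Before
import org.junit.jupiter.api.Test;        // replaces org.junit.Test

class ExampleMigratedTest {
  @BeforeEach
  void setUp() {
    // setup previously under JUnit 4's @Before
  }

  @Test
  void additionWorks() {
    // JUnit 4's org.junit.Assert.assertEquals becomes
    // org.junit.jupiter.api.Assertions.assertEquals, and
    // @Test(timeout=...) becomes Assertions.assertTimeout(...).
    assertEquals(4, 2 + 2);
  }
}
{code}
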
> Upgrade JUnit from 4 to 5 in hadoop-hdfs
> 
>
> Key: HDFS-12254
> URL: https://issues.apache.org/jira/browse/HDFS-12254
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>
> Feel free to create sub-tasks for each module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


