[jira] [Commented] (HDDS-848) Create SCM metrics related to container state

2018-12-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706783#comment-16706783
 ] 

Hadoop QA commented on HDDS-848:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 31s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.container.common.helpers.TestBlockData |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.TestBlockDeletingService |
|   | hadoop.ozone.scm.TestContainerSmallFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-848 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-14118) RBF: Use DNS to help resolve routers

2018-12-02 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706699#comment-16706699
 ] 

Surendra Singh Lilhore commented on HDFS-14118:
---

{quote}Clients will need to know about routers to talk to the HDFS cluster 
(obviously), and having routers updating (adding/removing) will have to make 
every client change, which is a painful process.
{quote}
[~linyiqun], HDFS-14079 is for the router admin RPC, but I think [~fengnanli] 
is talking about the normal file system RPC for the router.

 
{quote}Another way is to have the client do the normal connecting/failover 
after they get the list of routers
{quote}
[~fengnanli], currently it is doing the same, right?

 

> RBF: Use DNS to help resolve routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Clients will need to know about routers to talk to the HDFS cluster 
> (obviously), and adding or removing routers forces a change on every 
> client, which is a painful process.
> DNS can be used here to resolve the single domain name clients know to a 
> list of routers in the current config. However, DNS won't be able to 
> resolve only to the working routers based on certain health thresholds.
> There are some ways this can be solved. One way is to have a separate 
> script regularly check the status of the routers and update the DNS 
> records if a router fails the health thresholds; security would have to be 
> carefully considered for this approach. Another way is to have the client 
> do the normal connecting/failover after it gets the list of routers, which 
> requires a change to the current failover proxy provider.
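
A minimal sketch of the DNS half of this idea, assuming a single domain name 
(the placeholder {{routers.example.com}}) with one A record per router; 
{{InetAddress.getAllByName}} returns every address behind the name:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class RouterDnsResolver {
  // Resolve one router domain name to the full list of router addresses.
  // The class and domain name here are illustrative placeholders.
  public static InetAddress[] resolveRouters(String domain)
      throws UnknownHostException {
    return InetAddress.getAllByName(domain);
  }

  public static void main(String[] args) throws UnknownHostException {
    for (InetAddress addr : resolveRouters("routers.example.com")) {
      System.out.println(addr.getHostAddress());
    }
  }
}
{code}

Health-based filtering would still have to happen outside DNS, either in the 
record-updating script or in the client-side failover logic described above.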



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-848) Create SCM metrics related to container state

2018-12-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-848:

Attachment: HDDS-848.02.patch

> Create SCM metrics related to container state
> -
>
> Key: HDDS-848
> URL: https://issues.apache.org/jira/browse/HDDS-848
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-848.00.patch, HDDS-848.01.patch, HDDS-848.02.patch
>
>
> 1. Number of containers (open, closing, closed).
> Also see if any other useful metrics can be added.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-848) Create SCM metrics related to container state

2018-12-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706684#comment-16706684
 ] 

Bharat Viswanadham commented on HDDS-848:
-

Thank you, [~linyiqun], for the review.

I have addressed the review comments and checkstyle issues in patch v02.

> Create SCM metrics related to container state
> -
>
> Key: HDDS-848
> URL: https://issues.apache.org/jira/browse/HDDS-848
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-848.00.patch, HDDS-848.01.patch, HDDS-848.02.patch
>
>
> 1. Number of containers (open, closing, closed).
> Also see if any other useful metrics can be added.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-12-02 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706669#comment-16706669
 ] 

Surendra Singh Lilhore commented on HDFS-14085:
---

+1

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch, 
> HDFS-14085-HDFS-13891-04.patch, HDFS-14085-HDFS-13891-05.patch, 
> HDFS-14085-HDFS-13891-06.patch, HDFS-14085-HDFS-13891-07.patch, 
> HDFS-14085-HDFS-13891-08.patch, HDFS-14085-HDFS-13891-09.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info is 
> the same as that of the user calling it, which should actually be the same 
> as that of the destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-02 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706660#comment-16706660
 ] 

Dinesh Chitlangia commented on HDDS-99:
---

Quick update.

I have finished the implementation for SCMBlockProtocolServer and 
SCMDatanodeProtocolServer.

I have started on SCMClientProtocolServer and will be posting a patch soon.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
>
> This ticket is opened to add an SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14113) EC : Add Configuration to restrict UserDefined Policies

2018-12-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706644#comment-16706644
 ] 

Hadoop QA commented on HDFS-14113:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 453 unchanged - 0 fixed = 459 total (was 453) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950327/HDFS-14113-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 2d3d5da13566 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 042c8ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25693/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25693/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Comment Edited] (HDFS-14118) RBF: Use DNS to help resolve routers

2018-12-02 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706635#comment-16706635
 ] 

Yiqun Lin edited comment on HDFS-14118 at 12/3/18 3:52 AM:
---

{quote}
Another way is to have the client do the normal connecting/failover after they 
get the list of routers, which requires the change of current failover proxy 
provider.
{quote}
[~fengnanli], I suppose HDFS-14079 will address this approach.


was (Author: linyiqun):
{quote}
Another way is to have the client do the normal connecting/failover after they 
get the list of routers, which requires the change of current failover proxy 
provider.
Attachments
{quote}
[~fengnanli], I suppose HDFS-14079 will address this approach.

> RBF: Use DNS to help resolve routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Clients will need to know about routers to talk to the HDFS cluster 
> (obviously), and adding or removing routers forces a change on every 
> client, which is a painful process.
> DNS can be used here to resolve the single domain name clients know to a 
> list of routers in the current config. However, DNS won't be able to 
> resolve only to the working routers based on certain health thresholds.
> There are some ways this can be solved. One way is to have a separate 
> script regularly check the status of the routers and update the DNS 
> records if a router fails the health thresholds; security would have to be 
> carefully considered for this approach. Another way is to have the client 
> do the normal connecting/failover after it gets the list of routers, which 
> requires a change to the current failover proxy provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14118) RBF: Use DNS to help resolve routers

2018-12-02 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706635#comment-16706635
 ] 

Yiqun Lin commented on HDFS-14118:
--

{quote}
Another way is to have the client do the normal connecting/failover after they 
get the list of routers, which requires the change of current failover proxy 
provider.
Attachments
{quote}
[~fengnanli], I suppose HDFS-14079 will address this approach.

> RBF: Use DNS to help resolve routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Clients will need to know about routers to talk to the HDFS cluster 
> (obviously), and adding or removing routers forces a change on every 
> client, which is a painful process.
> DNS can be used here to resolve the single domain name clients know to a 
> list of routers in the current config. However, DNS won't be able to 
> resolve only to the working routers based on certain health thresholds.
> There are some ways this can be solved. One way is to have a separate 
> script regularly check the status of the routers and update the DNS 
> records if a router fails the health thresholds; security would have to be 
> carefully considered for this approach. Another way is to have the client 
> do the normal connecting/failover after it gets the list of routers, which 
> requires a change to the current failover proxy provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-848) Create SCM metrics related to container state

2018-12-02 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706631#comment-16706631
 ] 

Yiqun Lin commented on HDDS-848:


The patch almost looks good, [~bharatviswa]. Some minor comments:

*StorageContainerManager.java*
 Can we add the {{@Override}} tag for {{getContainerStateCount}}?

*TestSCMMXBean.java*
 Line 144, Line 150: Can we format these lines? It looks like some whitespace 
is missing.
 Line 138, Line 166: Can we add an additional check of the expected container 
state from the map key before we call the {{verifyEquals}} function?
 BTW, please fix the checkstyle issues.
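
A minimal sketch of the kind of key check being asked for here; the map shape 
(container state name to count) is implied by the comment, and the class and 
state names below are illustrative, not the actual patch:

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ContainerStateCountCheck {
  private static final Set<String> EXPECTED_STATES =
      new HashSet<>(Arrays.asList("OPEN", "CLOSING", "CLOSED"));

  // Verify every map key is a known container state before comparing counts.
  static void checkStateKeys(Map<String, Integer> containerStateCount) {
    for (String state : containerStateCount.keySet()) {
      if (!EXPECTED_STATES.contains(state)) {
        throw new AssertionError("Unexpected container state: " + state);
      }
    }
  }
}
{code}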

> Create SCM metrics related to container state
> -
>
> Key: HDDS-848
> URL: https://issues.apache.org/jira/browse/HDDS-848
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-848.00.patch, HDDS-848.01.patch
>
>
> 1. Number of containers (open, closing, closed).
> Also see if any other useful metrics can be added.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11039) Expose more configuration properties to hdfs-default.xml

2018-12-02 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706620#comment-16706620
 ] 

Dinesh Chitlangia commented on HDFS-11039:
--

[~hitliuyi], could you please add more details about which configuration 
properties you are proposing to add to hdfs-default.xml?

> Expose more configuration properties to hdfs-default.xml
> 
>
> Key: HDFS-11039
> URL: https://issues.apache.org/jira/browse/HDFS-11039
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, newbie
>Reporter: Yi Liu
>Assignee: Jennica Pounds
>Priority: Minor
>
> There are some configuration properties for HDFS that have not been exposed 
> in hdfs-default.xml.
> It would be convenient for Hadoop users/admins if we added them to 
> hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14113) EC : Add Configuration to restrict UserDefined Policies

2018-12-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14113:

Status: Patch Available  (was: Open)

> EC : Add Configuration to restrict UserDefined Policies
> ---
>
> Key: HDFS-14113
> URL: https://issues.apache.org/jira/browse/HDFS-14113
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14113-01.patch
>
>
> By default, the addition of erasure coding policies is enabled for users. We 
> need to add a configuration to control whether the addition of new 
> user-defined policies is allowed, configured as a Boolean value on the 
> server side.
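
A minimal sketch of the server-side guard this describes; the config key and 
class names are assumptions for illustration, not the actual HDFS-14113 patch:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class UserDefinedEcPolicyGuard {
  // Hypothetical key; the real key name is defined by the patch.
  static final String EC_USER_DEFINED_POLICY_ALLOWED_KEY =
      "dfs.namenode.ec.userdefined.policy.allowed";

  // Reject user-defined policy additions when the Boolean switch is off.
  static void checkUserDefinedPolicyAllowed(Configuration conf)
      throws IOException {
    if (!conf.getBoolean(EC_USER_DEFINED_POLICY_ALLOWED_KEY, true)) {
      throw new IOException(
          "Addition of user-defined erasure coding policies is disabled.");
    }
  }
}
{code}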



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14113) EC : Add Configuration to restrict UserDefined Policies

2018-12-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14113:

Attachment: HDFS-14113-01.patch

> EC : Add Configuration to restrict UserDefined Policies
> ---
>
> Key: HDFS-14113
> URL: https://issues.apache.org/jira/browse/HDFS-14113
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14113-01.patch
>
>
> By default, the addition of erasure coding policies is enabled for users. We 
> need to add a configuration to control whether the addition of new 
> user-defined policies is allowed, configured as a Boolean value on the 
> server side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-890) Handle OverlappingFileLockException during writeStateMachineData in ContainerStateMachine

2018-12-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706467#comment-16706467
 ] 

Hadoop QA commented on HDDS-890:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 44s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  6s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 30s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Redundant nullcheck of dispatcherContext, which is known to be non-null 
in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handlePutSmallFile(ContainerProtos$ContainerCommandRequestProto,
 KeyValueContainer, DispatcherContext)  Redundant null check at 
KeyValueHandler.java:is known to be non-null in 

[jira] [Updated] (HDDS-890) Handle OverlappingFileLockException during writeStateMachineData in ContainerStateMachine

2018-12-02 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-890:
---
Status: Patch Available  (was: Open)

> Handle OverlappingFileLockException during writeStateMachineData in 
> ContainerStateMachine
> ---
>
> Key: HDDS-890
> URL: https://issues.apache.org/jira/browse/HDDS-890
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-890.000.patch
>
>
> With raft log entries getting truncated, the corresponding 
> writeStateMachineData futures doing the actual chunk writes cannot be 
> aborted. The next log entry corresponding to the same client request may 
> then end up writing the same chunk file concurrently, hitting an 
> OverlappingFileLockException. This Jira aims to address the issue by 
> appending the logIndex and term to every tmp chunkFile name created during 
> the writeStateMachineData call.
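
A minimal sketch of the naming idea, with hypothetical helper and parameter 
names (the actual patch defines its own):

{code:java}
public class TmpChunkFileNames {
  // Append term and log index so a truncated-and-retried log entry writes a
  // different tmp file than its predecessor, avoiding overlapping file locks.
  static String tmpChunkFileName(String chunkName, long term, long logIndex) {
    return chunkName + ".tmp." + term + "." + logIndex;
  }

  public static void main(String[] args) {
    System.out.println(tmpChunkFileName("chunk_1", 2, 350));
  }
}
{code}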



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) Terminate the namenode when failed to start log segment

2018-12-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706423#comment-16706423
 ] 

Hudson commented on HDFS-14075:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15545 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15545/])
HDFS-14075. Terminate the namenode when failed to start log segment. 
(surendralilhore: rev 042c8ef593ced1915a688e99aa9a6a52fdf66734)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogJournalFailures.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java


> Terminate the namenode when failed to start log segment
> ---
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch, HDFS-14075-06.patch, 
> HDFS-14075-07.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, the following exception was received:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}
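
A minimal sketch of the fail-fast pattern the title describes, using Hadoop's 
{{ExitUtil}} (which the log above shows in action); this is only the shape of 
the change, not the actual HDFS-14075 diff:

{code:java}
import java.io.IOException;
import org.apache.hadoop.util.ExitUtil;

public class FailFastOnLogSegment {
  // Stand-in for FSEditLog.startLogSegment(), which throws IOException when
  // too few journals start successfully.
  static void startLogSegment(long segmentTxId) throws IOException {
    throw new IOException("Unable to start log segment " + segmentTxId
        + ": too few journals successfully started.");
  }

  public static void main(String[] args) {
    try {
      startLogSegment(7964819L);
    } catch (IOException ioe) {
      // Losing the edit log is unrecoverable; terminate the NameNode rather
      // than continue and hit an NPE later.
      ExitUtil.terminate(1, "Exception while starting log segment: " + ioe);
    }
  }
}
{code}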



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Updated] (HDFS-14075) Terminate the namenode when failed to start log segment

2018-12-02 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14075:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn] for the contribution. Thanks [~vinayrpet] and 
[~trjianjianjiao] for the review.

> Terminate the namenode when failed to start log segment
> ---
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch, HDFS-14075-06.patch, 
> HDFS-14075-07.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, the following exception was received:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14104) Review getImageTxIdToRetain

2018-12-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706374#comment-16706374
 ] 

Hadoop QA commented on HDFS-14104:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950313/HDFS-14104.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b7674d862d86 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8f3e12f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25692/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25692/testReport/ |
| Max. process+thread count 

[jira] [Commented] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706360#comment-16706360
 ] 

Hadoop QA commented on HDFS-14107:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 54 unchanged - 3 fixed = 54 total (was 57) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFileContextDeleteOnExit |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950312/HADOOP-14107.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c67d457c9e0c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8f3e12f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25691/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25691/testReport/ |
| Max. process+thread count | 1464 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14107:
---
Status: Patch Available  (was: Open)

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-14107.2.patch, HDFS-14107.1.patch
>
>
> {code:java|title=FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> }
> {code}
> # Include the {{IOException}} in the logging so that admins can know why the 
> file was not deleted.
> # Do not bother clearing out the data structure. This code is only called if 
> the JVM is going down. Better to spend the time allowing another shutdown 
> hook to run than to spend time cleaning this thing up.
> # Use Guava {{Multimap}} for readability.
> # Paths are currently stored in a {{TreeSet}}. This set implementation 
> orders the files by name. It does not seem worth much to order the files. 
> Use a faster {{HashSet}} (a combined sketch of these changes follows below).
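
A minimal, self-contained sketch of what the four points above could look 
like together; the class and method names are illustrative, not the patch 
itself:

{code:java}
import java.io.IOException;
import java.util.Map;

import com.google.common.collect.HashMultimap;
import com.google.common.collect.SetMultimap;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeleteOnExitSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeleteOnExitSketch.class);

  // Guava SetMultimap backed by HashSets: unordered paths per FileContext.
  private static final SetMultimap<FileContext, Path> DELETE_ON_EXIT =
      HashMultimap.create();

  static void deleteAllOnExit() {
    synchronized (DELETE_ON_EXIT) {
      for (Map.Entry<FileContext, Path> entry : DELETE_ON_EXIT.entries()) {
        try {
          entry.getKey().delete(entry.getValue(), true);
        } catch (IOException e) {
          // Pass the exception so admins can see why deletion failed.
          LOG.warn("Ignoring failure to deleteOnExit for path {}",
              entry.getValue(), e);
        }
      }
      // No clear(): the JVM is shutting down, so skip the cleanup work.
    }
  }
}
{code}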



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14104) Review getImageTxIdToRetain

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14104:
---
Attachment: HDFS-14104.1.patch

> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(FSImageTransactionalStorageInspector 
>       inspector) {
> 
>     List<FSImageFile> images = inspector.getFoundImages();
>     TreeSet<Long> imageTxIds = Sets.newTreeSet();
>     for (FSImageFile image : images) {
>       imageTxIds.add(image.getCheckpointTxId());
>     }
> 
>     List<Long> imageTxIdsList = Lists.newArrayList(imageTxIds);
>     if (imageTxIdsList.isEmpty()) {
>       return 0;
>     }
> 
>     Collections.reverse(imageTxIdsList);
>     int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());
>     long minTxId = imageTxIdsList.get(toRetain - 1);
>     LOG.info("Going to retain " + toRetain + " images with txid >= " +
>         minTxId);
>     return minTxId;
>   }
> {code}
> # Fix checkstyle issues.
> # Use SLF4J parameterized logging.
> # A lot of work gets done before checking whether the list actually contains 
> any entries and returning 0. That should be the first thing that happens.
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step (a sketch of this rewrite follows below).
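
A minimal sketch of the suggested rewrite; for self-containment the inspector 
dependency is replaced by a plain list of checkpoint txids, so the class and 
parameter names are illustrative:

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.TreeSet;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ImageTxIdToRetainSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ImageTxIdToRetainSketch.class);

  static long getImageTxIdToRetain(List<Long> checkpointTxIds,
      int numCheckpointsToRetain) {
    // Check for the empty case first, before doing any other work.
    if (checkpointTxIds.isEmpty()) {
      return 0;
    }
    // Descending order from the start; no Collections.reverse() pass needed.
    TreeSet<Long> imageTxIds = new TreeSet<>(Collections.reverseOrder());
    imageTxIds.addAll(checkpointTxIds);

    int toRetain = Math.min(numCheckpointsToRetain, imageTxIds.size());
    long minTxId = 0;
    int seen = 0;
    for (long txId : imageTxIds) {
      if (++seen == toRetain) {
        minTxId = txId;
        break;
      }
    }
    // SLF4J parameterized logging instead of string concatenation.
    LOG.info("Going to retain {} images with txid >= {}", toRetain, minTxId);
    return minTxId;
  }
}
{code}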



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14104) Review getImageTxIdToRetain

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14104:
---
Status: Patch Available  (was: Open)

Submitting the patch again. Two completely different unit tests are now 
failing compared to the first run.

> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(FSImageTransactionalStorageInspector 
>       inspector) {
> 
>     List<FSImageFile> images = inspector.getFoundImages();
>     TreeSet<Long> imageTxIds = Sets.newTreeSet();
>     for (FSImageFile image : images) {
>       imageTxIds.add(image.getCheckpointTxId());
>     }
> 
>     List<Long> imageTxIdsList = Lists.newArrayList(imageTxIds);
>     if (imageTxIdsList.isEmpty()) {
>       return 0;
>     }
> 
>     Collections.reverse(imageTxIdsList);
>     int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());
>     long minTxId = imageTxIdsList.get(toRetain - 1);
>     LOG.info("Going to retain " + toRetain + " images with txid >= " +
>         minTxId);
>     return minTxId;
>   }
> {code}
> # Fix checkstyle issues.
> # Use SLF4J parameterized logging.
> # A lot of work gets done before checking whether the list actually contains 
> any entries and returning 0. That should be the first thing that happens.
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14104) Review getImageTxIdToRetain

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14104:
---
Status: Open  (was: Patch Available)

> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(
>       FSImageTransactionalStorageInspector inspector) {
>     List<FSImageFile> images = inspector.getFoundImages();
>     TreeSet<Long> imageTxIds = Sets.newTreeSet();
>     for (FSImageFile image : images) {
>       imageTxIds.add(image.getCheckpointTxId());
>     }
>
>     List<Long> imageTxIdsList = Lists.newArrayList(imageTxIds);
>     if (imageTxIdsList.isEmpty()) {
>       return 0;
>     }
>
>     Collections.reverse(imageTxIdsList);
>     int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());
>     long minTxId = imageTxIdsList.get(toRetain - 1);
>     LOG.info("Going to retain " + toRetain + " images with txid >= " +
>         minTxId);
>     return minTxId;
>   }
> {code}
> # Fix checkstyle issues
> # Use SLF4J parameterized logging
> # A lot of work gets done before the code checks whether the list actually 
> contains any entries and returns 0.  That check should be the first thing 
> that happens.
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14107:
---
Status: Open  (was: Patch Available)

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-14107.2.patch, HDFS-14107.1.patch
>
>
> {code:java|title=FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> }
> {code}
> # Include the {{IOException}} in the log message so that admins can see why 
> the file was not deleted
> # Do not bother clearing out the data structure.  This code is only called 
> when the JVM is going down.  Better to spend that time letting another 
> shutdown hook run than to spend it cleaning this structure up.
> # Use a Guava {{Multimap}} for readability
> # Paths are currently stored in a {{TreeSet}}.  That set implementation 
> orders the files by name, which provides little value here.  Use a faster 
> {{HashSet}} instead (see the sketch below).
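> A minimal sketch of the revised loop, assuming {{DELETE_ON_EXIT}} becomes a 
> Guava {{Multimap<FileContext, Path>}} with hash-based value sets and that 
> {{LOG}} is an SLF4J logger; illustrative only, not the final patch:
> {code:java|title=Sketch}
> synchronized (DELETE_ON_EXIT) {
>   for (Entry<FileContext, Path> entry : DELETE_ON_EXIT.entries()) {
>     FileContext fc = entry.getKey();
>     Path path = entry.getValue();
>     try {
>       fc.delete(path, true);
>     } catch (IOException e) {
>       // Pass the exception so admins can see why the delete failed
>       LOG.warn("Ignoring failure to deleteOnExit for path {}", path, e);
>     }
>   }
>   // No DELETE_ON_EXIT.clear(): the JVM is shutting down anyway
> }
> {code}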



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14107:
---
Attachment: HADOOP-14107.2.patch

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-14107.2.patch, HDFS-14107.1.patch
>
>
> {code:java|title=FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> }
> {code}
> # Include the {{IOException}} in the log message so that admins can see why 
> the file was not deleted
> # Do not bother clearing out the data structure.  This code is only called 
> when the JVM is going down.  Better to spend that time letting another 
> shutdown hook run than to spend it cleaning this structure up.
> # Use a Guava {{Multimap}} for readability
> # Paths are currently stored in a {{TreeSet}}.  That set implementation 
> orders the files by name, which provides little value here.  Use a faster 
> {{HashSet}} instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14119) GreedyPlanner Parameter Logging

2018-12-02 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706284#comment-16706284
 ] 

BELUGA BEHR commented on HDFS-14119:


{code}
 org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
{code}

This unit test is failing regularly and is unrelated to the patch.  Please 
consider the patch for inclusion into the project.

> GreedyPlanner Parameter Logging
> ---
>
> Key: HDFS-14119
> URL: https://issues.apache.org/jira/browse/HDFS-14119
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-14119.1.patch
>
>
> 1. Do not use {{String.format()}} in conjunction with SLF4J; it is 
> superfluous:
> {code:java}
> String message = String
> .format("Compute Plan for Node : %s:%d took %d ms ",
> node.getDataNodeName(), node.getDataNodePort(),
> endTime - startTime);
> LOG.info(message);{code}
> 2. Do not call an explicit {{toString()}} on an object passed as an SLF4J 
> parameter.  Otherwise, the string is created and then thrown away whenever 
> the logger is not set to debug level.  Just pass the object itself and the 
> framework will call {{toString()}} only if needed.
> {code}
> LOG.debug("Step : {} ",  nextStep.toString());
> {code}
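> A minimal sketch of both calls rewritten with parameterized logging, 
> assuming {{LOG}} is an SLF4J logger; illustrative only:
> {code:java|title=Sketch}
> // No String.format(): SLF4J builds the message only if INFO is enabled
> LOG.info("Compute Plan for Node : {}:{} took {} ms",
>     node.getDataNodeName(), node.getDataNodePort(), endTime - startTime);
>
> // Pass the object itself; toString() runs only if DEBUG is enabled
> LOG.debug("Step : {}", nextStep);
> {code}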



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) Terminate the namenode when failed to start log segment

2018-12-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706151#comment-16706151
 ] 

Hadoop QA commented on HDFS-14075:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949507/HDFS-14075-07.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f887f4f2179 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8f3e12f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25690/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25690/testReport/ |
| Max. process+thread count | 3985 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25690/console |
| Powered 

[jira] [Commented] (HDDS-882) Provide a config to optionally turn on/off the sync flag during chunk writes

2018-12-02 Thread alex fu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16706142#comment-16706142
 ] 

alex fu commented on HDDS-882:
--

Hello Hudson, I have installed a Kafka cluster with Ambari, but when I update 
the Kafka configuration in the Ambari UI, Ambari cannot push the new 
configuration version to the Kafka nodes.
The Kafka server.properties configuration updates successfully, but when I 
change the Kafka JAAS config (including kafka_jaas_config and 
kafka_client_jaas_config), it does not overwrite the existing JAAS config on 
the Kafka nodes.
Could you also tell me which script performs the configuration comparison and 
overwrites the local configuration when the two do not match?  I have looked 
at RecoveryManager.py, ClusterConfig.py, and AmbariConfig.py, but in none of 
them can I find the log message "writing file configurationfile , because 
content don't match".
Any ideas?  It is urgent.  Thanks very much.

> Provide a config to optionally turn on/off the sync flag during chunk writes
> 
>
> Key: HDDS-882
> URL: https://issues.apache.org/jira/browse/HDDS-882
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: HDDS-882.000.patch, HDDS-882.001.patch
>
>
> Currently, chunk writes happen with the sync flag on.  We should add a 
> config to enable/disable this for performance benchmarks (see the sketch 
> below).
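> A minimal sketch of the idea; the config key name 
> ({{hdds.container.chunk.write.sync}}), the {{chunkFile}} path, and the 
> {{data}} buffer are hypothetical placeholders, not the final patch:
> {code:java|title=Sketch}
> // Hypothetical key: whether to fsync each chunk write (default on)
> boolean syncChunkWrites =
>     conf.getBoolean("hdds.container.chunk.write.sync", true);
>
> try (FileChannel channel = FileChannel.open(chunkFile,
>     StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
>   channel.write(data);     // data is a ByteBuffer holding the chunk bytes
>   if (syncChunkWrites) {
>     channel.force(false);  // flush file data (not metadata) when enabled
>   }
> }
> {code}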



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org