[jira] [Updated] (HADOOP-16447) Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout

2019-07-24 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HADOOP-16447:
--
Attachment: HADOOP-16447.001.patch

> Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout
> ---
>
> Key: HADOOP-16447
> URL: https://issues.apache.org/jira/browse/HADOOP-16447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: kevin su
>Priority: Major
> Attachments: HADOOP-16447.001.patch
>
>
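For reference, a minimal sketch of the 5.5 feature being targeted (illustrative test only, not from the patch): the global default comes from the {{junit.jupiter.execution.timeout.default}} configuration parameter, and the {{@Timeout}} annotation (new and still experimental in 5.5) overrides it per test.

{code}
// src/test/resources/junit-platform.properties (global default, new in 5.5):
//   junit.jupiter.execution.timeout.default = 10 s

import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

class GlobalTimeoutExample {
  @Test
  @Timeout(value = 30, unit = TimeUnit.SECONDS) // per-test override
  void boundedTest() throws InterruptedException {
    Thread.sleep(10); // would fail if it ran longer than 30 seconds
  }
}
{code}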







[GitHub] [hadoop] bharatviswa504 commented on issue #1155: HDDS-1842. Implement S3 Abort MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
bharatviswa504 commented on issue #1155: HDDS-1842. Implement S3 Abort MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1155#issuecomment-514873288
 
 
   /retest





[jira] [Commented] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892363#comment-16892363
 ] 

Hadoop QA commented on HADOOP-16459:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
13s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
23s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 53s{color} | {color:orange} root: The patch generated 6 new + 345 unchanged 
- 7 fixed = 351 total (was 352) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1033: HDDS-1391 : Add ability in OM to serve delta updates through an API.

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1033: HDDS-1391 : Add ability in OM to serve 
delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#issuecomment-514871582
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 359 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 861 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 427 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 617 | trunk passed |
   | -0 | patch | 460 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 566 | the patch passed |
   | +1 | compile | 371 | the patch passed |
   | +1 | cc | 371 | the patch passed |
   | +1 | javac | 371 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 69 | hadoop-hdds generated 1 new + 15 unchanged - 0 fixed = 
16 total (was 15) |
   | +1 | findbugs | 633 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 277 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2023 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7765 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1033 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux f5771a69a0a5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b8b3ac |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/7/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/7/testReport/ |
   | Max. process+thread count | 4311 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892346#comment-16892346
 ] 

Duo Zhang commented on HADOOP-16451:


My 2 cents:

1. It is fine to officially say that a release line is EOL. For HBase, we drop 
support for legacy releases when we cut a new minor release.
2. It is still friendlier to make more 2.8.x and 2.7.x releases because of the 
CVEs, so the current release lines could still benefit. In general, at least 
for HBase, we cannot drop support for an entire Hadoop release line in a patch 
release. That is, if HBase 2.2.0 supports Hadoop 2.8.x and we make a new 2.8.6 
release, we will drop support for 2.8.[1-5] because of the CVEs and support 
only 2.8.6. But if we do not make any new 2.8.x releases, we could probably 
only stay on 2.8.5 then...

Thanks.

> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[jira] [Assigned] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-07-24 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HADOOP-16457:
--

Assignee: Prabhu Joseph

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled. This is incorrect. When simple security is chosen and 
> StaticUserWebFilter is used, the AuthFilter check should not be required for 
> the datanode to communicate with the namenode.
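A minimal sketch of the guard this report implies (hypothetical, not the committed fix; {{filterInitializerNames}} is an illustrative name):

{code}
// Hypothetical sketch: only wire in the AuthFilterInitializer when Kerberos
// security is actually enabled. Under simple security with StaticUserWebFilter
// the AuthFilter is skipped, so the datanode can talk to the namenode again.
if (UserGroupInformation.isSecurityEnabled()) {
  filterInitializerNames.add(AuthFilterInitializer.class.getName());
}
{code}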






[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-07-24 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892338#comment-16892338
 ] 

Prabhu Joseph commented on HADOOP-16457:


[~eyang] I will work on this, assigning to me.

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Minor
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled. This is incorrect. When simple security is chosen and 
> StaticUserWebFilter is used, the AuthFilter check should not be required for 
> the datanode to communicate with the namenode.






[jira] [Commented] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892337#comment-16892337
 ] 

Wei-Chiu Chuang commented on HADOOP-16451:
--

Good point [~Apache9]
We haven't made any 2.8.x releases for more than a year (the last one was in 
May 2018). I thought we had abandoned this release line, but at the last meetup 
it sounded like most community folks are still on 2.8 or even 2.7.

So I feel like we probably need at least one more 2.8 release. Thoughts?

> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[jira] [Commented] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892330#comment-16892330
 ] 

lqjacklee commented on HADOOP-16435:


[~kgyrtkirk] I mean that a session ending does not imply that the server has 
ended.

> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked), it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> Background:
> * HiveServer2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a Tez session
> * each Tez session has its own RpcMetrics
> * with a 150M heap, the JVM hits an OutOfMemoryError after around 30 sessions
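A minimal sketch of the missing cleanup described above (hypothetical, not the attached patch; {{name}} is assumed to be the name the source was registered under):

{code}
// Hypothetical sketch: unregister the source on shutdown so the
// DefaultMetricsSystem drops its reference instead of retaining the
// RpcMetrics instance for the life of the JVM.
public void shutdown() {
  DefaultMetricsSystem.instance().unregisterSource(name);
}
{code}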






[jira] [Commented] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892328#comment-16892328
 ] 

Duo Zhang commented on HADOOP-16451:


So what's the plan for the 2.8.x releases? HBase is currently still on hadoop 
2.8.x, and I see that the transitive dependencies for jackson are still 1.8.x 
or 1.9.x... 

> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[jira] [Commented] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892318#comment-16892318
 ] 

Hudson commented on HADOOP-16451:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16981 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16981/])
HADOOP-16451. Update jackson-databind to 2.9.9.1. Contributed by Siyao 
(weichiu: rev 9b8b3acb0a2b87356056c23f3d0f30a97a38cd3d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml
* (edit) hadoop-project/pom.xml


> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[GitHub] [hadoop] hadoop-yetus commented on issue #1155: HDDS-1842. Implement S3 Abort MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1155: HDDS-1842. Implement S3 Abort MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1155#issuecomment-514849947
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 162 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 7 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 52 | Maven dependency ordering for branch |
   | +1 | mvninstall | 847 | trunk passed |
   | +1 | compile | 450 | trunk passed |
   | +1 | checkstyle | 91 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1182 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 212 | trunk passed |
   | 0 | spotbugs | 515 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 761 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 719 | the patch passed |
   | +1 | compile | 460 | the patch passed |
   | +1 | javac | 460 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 880 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 217 | the patch passed |
   | +1 | findbugs | 851 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 367 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2810 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 10449 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1155/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1155 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 595b18f9838e 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1d98a21 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1155/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1155/1/testReport/ |
   | Max. process+thread count | 3591 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1155/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892302#comment-16892302
 ] 

Wei-Chiu Chuang commented on HADOOP-16451:
--

Pushed to trunk. Thanks [~smeng] for the patch and [~aajisaka] for the review.

 

Since this fixes a security vulnerability, should we cherry pick the commit 
into lower releases? I typically don't do it for a normal dependency update.

> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[jira] [Updated] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16451:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[jira] [Commented] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892298#comment-16892298
 ] 

Hadoop QA commented on HADOOP-16459:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
49s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
36s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 6 new + 350 unchanged 
- 7 fixed = 356 total (was 357) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:e402791 |
| JIRA Issue | HADOOP-16459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975705/HADOOP-16266-branch-3.0.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 

[jira] [Commented] (HADOOP-16461) Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892278#comment-16892278
 ] 

Gopal V commented on HADOOP-16461:
--

Linked the lines up and opened a PR.

> Regression: FileSystem cache lock parses XML within the lock
> 
>
> Key: HADOOP-16461
> URL: https://issues.apache.org/jira/browse/HADOOP-16461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
>
> https://github.com/apache/hadoop/blob/2546e6ece240924af2188bb39b3954a4896e4a4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3388
> {code}
>   fs = createFileSystem(uri, conf);
>   synchronized (this) { // refetch the lock again
> FileSystem oldfs = map.get(key);
> if (oldfs != null) { // a file system is created while lock is 
> releasing
>   fs.close(); // close the new file system
>   return oldfs;  // return the old file system
> }
> // now insert the new file system into the map
> if (map.isEmpty()
> && !ShutdownHookManager.get().isShutdownInProgress()) {
>   ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
> SHUTDOWN_HOOK_PRIORITY);
> }
> fs.key = key;
> map.put(key, fs);
> if (conf.getBoolean(
> FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
>   toAutoClose.add(key);
> }
> return fs;
>   }
> {code}
> The lock now has a ShutdownHook creation, which ends up doing 
> https://github.com/apache/hadoop/blob/2546e6ece240924af2188bb39b3954a4896e4a4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ShutdownHookManager.java#L205
> {code}
> HookEntry(Runnable hook, int priority) {
>   this(hook, priority,
>   getShutdownTimeout(new Configuration()),
>   TIME_UNIT_DEFAULT);
> }
> {code}
> which ends up doing a "new Configuration()" within the locked section.
> This indirectly hurts the cache hit scenarios as well, since if the lock on 
> this is held, then the other section cannot be entered either.
> https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/sort/impl/TezSpillRecord.java#L65
> {code}
> I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, 
> FileSystem$Cache$Key) FileSystem.java:3345
> org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) 
> FileSystem.java:3320
> org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
> org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
> {code}
> slowing down the RawLocalFileSystem when there are other threads creating 
> HDFS FileSystem objects at the same time.






[jira] [Updated] (HADOOP-16461) Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-16461:
-
Description: 
https://github.com/apache/hadoop/blob/2546e6ece240924af2188bb39b3954a4896e4a4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3388

{code}
  fs = createFileSystem(uri, conf);
  synchronized (this) { // refetch the lock again
FileSystem oldfs = map.get(key);
if (oldfs != null) { // a file system is created while lock is releasing
  fs.close(); // close the new file system
  return oldfs;  // return the old file system
}

// now insert the new file system into the map
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
  ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
SHUTDOWN_HOOK_PRIORITY);
}
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
  toAutoClose.add(key);
}
return fs;
  }
{code}

The lock now has a ShutdownHook creation, which ends up doing 

https://github.com/apache/hadoop/blob/2546e6ece240924af2188bb39b3954a4896e4a4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ShutdownHookManager.java#L205
{code}
HookEntry(Runnable hook, int priority) {
  this(hook, priority,
  getShutdownTimeout(new Configuration()),
  TIME_UNIT_DEFAULT);
}
{code}

which ends up doing a "new Configuration()" within the locked section.

This indirectly hurts the cache hit scenarios as well, since if the lock on 
this is held, then the other section cannot be entered either.

https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/sort/impl/TezSpillRecord.java#L65

{code}
I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, 
FileSystem$Cache$Key) FileSystem.java:3345
org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) 
FileSystem.java:3320
org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
{code}

slowing down the RawLocalFileSystem when there are other threads creating HDFS 
FileSystem objects at the same time.

  was:
{code}
  fs = createFileSystem(uri, conf);
  synchronized (this) { // refetch the lock again
FileSystem oldfs = map.get(key);
if (oldfs != null) { // a file system is created while lock is releasing
  fs.close(); // close the new file system
  return oldfs;  // return the old file system
}

// now insert the new file system into the map
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
  ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
SHUTDOWN_HOOK_PRIORITY);
}
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
  toAutoClose.add(key);
}
return fs;
  }
{code}

The lock now has a ShutdownHook creation, which ends up doing 

{code}
HookEntry(Runnable hook, int priority) {
  this(hook, priority,
  getShutdownTimeout(new Configuration()),
  TIME_UNIT_DEFAULT);
}
{code}

which ends up doing a "new Configuration()" within the locked section.

This indirectly hurts the cache hit scenarios as well, since if the lock on 
this is held, then the other section cannot be entered either.

{code}
I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, 
FileSystem$Cache$Key) FileSystem.java:3345
org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) 
FileSystem.java:3320
org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
{code}

slowing down the RawLocalFileSystem when there are other threads creating HDFS 
FileSystem objects at the same time.


> Regression: FileSystem cache lock parses XML within the lock
> 
>
> Key: HADOOP-16461
> URL: https://issues.apache.org/jira/browse/HADOOP-16461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
>
> https://github.com/apache/hadoop/blob/2546e6ece240924af2188bb39b3954a4896e4a4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3388
> {code}
>   fs = createFileSystem(uri, conf);
>   

[GitHub] [hadoop] t3rmin4t0r opened a new pull request #1157: HADOOP-16461. Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread GitBox
t3rmin4t0r opened a new pull request #1157: HADOOP-16461. Regression: 
FileSystem cache lock parses XML within the lock
URL: https://github.com/apache/hadoop/pull/1157
 
 
   Instead of parsing a new configuration, rely on the existing conf object 
within FileSystem.
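   A minimal sketch of that idea (hedged: an illustration assuming the four-argument `addShutdownHook` overload and the `hadoop.service.shutdown.timeout` keys are used, not the actual diff):
   
   ```java
   // Hypothetical sketch: read the shutdown timeout from the conf that the
   // FileSystem cache already holds, so no "new Configuration()" is parsed
   // while the cache lock is held.
   long timeoutSecs = conf.getTimeDuration(
       CommonConfigurationKeysPublic.SERVICE_SHUTDOWN_TIMEOUT,
       CommonConfigurationKeysPublic.SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
       TimeUnit.SECONDS);
   ShutdownHookManager.get().addShutdownHook(clientFinalizer,
       SHUTDOWN_HOOK_PRIORITY, timeoutSecs, TimeUnit.SECONDS);
   ```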





[GitHub] [hadoop] anuengineer commented on issue #1152: HDDS-1817. GetKey fails with IllegalArgumentException.

2019-07-24 Thread GitBox
anuengineer commented on issue #1152: HDDS-1817. GetKey fails with 
IllegalArgumentException.
URL: https://github.com/apache/hadoop/pull/1152#issuecomment-514836331
 
 
   @nandakumar131  Thank you for the contribution. I have committed this patch 
to both the trunk and Ozone-0.4.1 branches.





[GitHub] [hadoop] anuengineer closed pull request #1152: HDDS-1817. GetKey fails with IllegalArgumentException.

2019-07-24 Thread GitBox
anuengineer closed pull request #1152: HDDS-1817. GetKey fails with 
IllegalArgumentException.
URL: https://github.com/apache/hadoop/pull/1152
 
 
   





[GitHub] [hadoop] surmountian commented on issue #466: HDFS-14201. Ability to disallow safemode NN to become active

2019-07-24 Thread GitBox
surmountian commented on issue #466: HDFS-14201. Ability to disallow safemode 
NN to become active
URL: https://github.com/apache/hadoop/pull/466#issuecomment-514832189
 
 
   > @surmountian IIUC, this issue has been resolved and merged to trunk. 
Please refer to: https://issues.apache.org/jira/browse/HDFS-14201. This PR 
could be closed now. cc @goiri
   > BTW, are 'surmountian' and 'Xiao Liang' the same person on GitHub and JIRA? ^_^
   
   yes :-)





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1156: HDDS-1830 OzoneManagerDoubleBuffer#stop should wait for daemon thread to die

2019-07-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1156: HDDS-1830 
OzoneManagerDoubleBuffer#stop should wait for daemon thread to die
URL: https://github.com/apache/hadoop/pull/1156#discussion_r307048203
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -64,7 +65,7 @@
   private final OMMetadataManager omMetadataManager;
   private final AtomicLong flushedTransactionCount = new AtomicLong(0);
   private final AtomicLong flushIterations = new AtomicLong(0);
-  private volatile boolean isRunning;
+  private final AtomicBoolean isRunning = new AtomicBoolean(true);
 
 Review comment:
   isRunning should be set to true only after daemon.start(), right?
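   For context, a minimal sketch (hypothetical; names follow the diff above, not the merged patch) of the stop-and-join semantics the PR title describes:
   
   ```java
   // Hypothetical sketch: stop() flips the flag and then waits for the flush
   // daemon to die instead of returning while the thread is still running.
   public void stop() {
     if (isRunning.compareAndSet(true, false)) {
       daemon.interrupt();
       try {
         daemon.join();  // block until the daemon thread has exited
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
       }
     }
   }
   ```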
   





[jira] [Commented] (HADOOP-16461) Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892245#comment-16892245
 ] 

Steve Loughran commented on HADOOP-16461:
-

Gopal, can you give us files & lines rather than just code snippets? Thanks.

> Regression: FileSystem cache lock parses XML within the lock
> 
>
> Key: HADOOP-16461
> URL: https://issues.apache.org/jira/browse/HADOOP-16461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
>
> {code}
>   fs = createFileSystem(uri, conf);
>   synchronized (this) { // refetch the lock again
> FileSystem oldfs = map.get(key);
> if (oldfs != null) { // a file system is created while lock is 
> releasing
>   fs.close(); // close the new file system
>   return oldfs;  // return the old file system
> }
> // now insert the new file system into the map
> if (map.isEmpty()
> && !ShutdownHookManager.get().isShutdownInProgress()) {
>   ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
> SHUTDOWN_HOOK_PRIORITY);
> }
> fs.key = key;
> map.put(key, fs);
> if (conf.getBoolean(
> FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
>   toAutoClose.add(key);
> }
> return fs;
>   }
> {code}
> The lock now has a ShutdownHook creation, which ends up doing 
> {code}
> HookEntry(Runnable hook, int priority) {
>   this(hook, priority,
>   getShutdownTimeout(new Configuration()),
>   TIME_UNIT_DEFAULT);
> }
> {code}
> which ends up doing a "new Configuration()" within the locked section.
> This indirectly hurts the cache hit scenarios as well, since if the lock on 
> this is held, then the other section cannot be entered either.
> {code}
> I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, 
> FileSystem$Cache$Key) FileSystem.java:3345
> org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) 
> FileSystem.java:3320
> org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
> org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
> {code}
> slowing down the RawLocalFileSystem when there are other threads creating 
> HDFS FileSystem objects at the same time.






[jira] [Commented] (HADOOP-16455) ABFS: Implement the access method

2019-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892242#comment-16892242
 ] 

Steve Loughran commented on HADOOP-16455:
-

which access method?

> ABFS: Implement the access method
> -
>
> Key: HADOOP-16455
> URL: https://issues.apache.org/jira/browse/HADOOP-16455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>
> Implement the access method
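Presumably this refers to {{FileSystem#access(Path, FsAction)}} (an assumption; the ticket text doesn't say). A minimal usage sketch:

{code}
// Assumed target API (not confirmed by the ticket): FileSystem#access checks
// whether the current user may perform the given action on a path, throwing
// AccessControlException when permission is denied.
FileSystem fs = FileSystem.get(conf);
fs.access(new Path("/user/data"), FsAction.READ);
{code}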






[GitHub] [hadoop] smengcl commented on issue #1156: HDDS-1830 OzoneManagerDoubleBuffer#stop should wait for daemon thread to die

2019-07-24 Thread GitBox
smengcl commented on issue #1156: HDDS-1830 OzoneManagerDoubleBuffer#stop 
should wait for daemon thread to die
URL: https://github.com/apache/hadoop/pull/1156#issuecomment-514821991
 
 
   Oops. Thanks for pointing it out, @arp7. Just updated the commit.





[GitHub] [hadoop] hadoop-yetus commented on issue #1153: HDDS-1855. TestStorageContainerManager#testScmProcessDatanodeHeartbeat is failing.

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1153: HDDS-1855. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat is failing.
URL: https://github.com/apache/hadoop/pull/1153#issuecomment-514820415
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 614 | trunk passed |
   | +1 | compile | 361 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 832 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 427 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 622 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 530 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 643 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 690 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 285 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2070 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7696 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1153/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1153 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 72eedd3ec688 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb69700 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1153/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1153/1/testReport/ |
   | Max. process+thread count | 4358 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1153/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] arp7 commented on a change in pull request #1156: HDDS-1830 OzoneManagerDoubleBuffer#stop should wait for daemon thread to die

2019-07-24 Thread GitBox
arp7 commented on a change in pull request #1156: HDDS-1830 
OzoneManagerDoubleBuffer#stop should wait for daemon thread to die
URL: https://github.com/apache/hadoop/pull/1156#discussion_r307044362
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -64,7 +65,7 @@
   private final OMMetadataManager omMetadataManager;
   private final AtomicLong flushedTransactionCount = new AtomicLong(0);
   private final AtomicLong flushIterations = new AtomicLong(0);
-  private volatile boolean isRunning;
+  private AtomicBoolean isRunning = new AtomicBoolean(true);
 
 Review comment:
   Let's also make it final.
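
   For reference, a minimal sketch of the suggested shape (field name follows the
   diff above; the stop() body is an assumption for illustration, not the actual patch):

{code}
// final: the AtomicBoolean is mutated in place and never reassigned
private final AtomicBoolean isRunning = new AtomicBoolean(true);

public void stop() {
  // compareAndSet guarantees the shutdown path runs exactly once even if
  // stop() is called from multiple threads
  if (isRunning.compareAndSet(true, false)) {
    daemon.interrupt();  // hypothetical: wake the flushing daemon thread
  }
}
{code}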





[jira] [Comment Edited] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892216#comment-16892216
 ] 

Erik Krogen edited comment on HADOOP-16459 at 7/24/19 10:07 PM:


I've put up a branch-2 patch as well. It has two additional modifications from 
the branch-3.0 patch:
* A lambda for {{GenericTestUtils.waitFor()}} is replaced with an anonymous 
subclass
* The {{RpcScheduler}} interface can no longer have default methods, since 
branch-2 uses Java 7. Unfortunately Java 7 has no way to emulate this behavior, 
so if users have a custom {{RpcScheduler}} implementation, it will break with 
this change. Our [compatibility 
policy|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html]
 states that it is acceptable for us to make this breaking change in a minor 
version release since this interface is marked as {{LimitedPrivate}} / 
{{Evolving}}.

Edit: v000 patch for branch-2 was old and still had compilation issues. I just 
put up v001 with the correct version. My mistake.
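
For reference, a minimal sketch of the lambda-to-anonymous-subclass rewrite,
assuming the Guava {{Supplier}} that {{GenericTestUtils.waitFor()}} accepts
(the wait condition {{done.get()}} is illustrative):

{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// branch-3 (Java 8): GenericTestUtils.waitFor(() -> done.get(), 100, 10000);
// branch-2 (Java 7): the same call site with an anonymous subclass
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return done.get();  // illustrative wait condition
  }
}, 100, 10000);
{code}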


was (Author: xkrogen):
I've put up a branch-2 patch as well. It has two additional modifications from 
the branch-3.0 patch:
* A lambda for {{GenericTestUtils.waitFor()}} is replaced with an anonymous 
subclass
* The {{RpcScheduler}} interface can no longer have default methods, since 
branch-2 uses Java 7. Unfortunately Java 7 has no way to emulate this behavior, 
so if users have a custom {{RpcScheduler}} implementation, it will break with 
this change. Our [compatibility 
policy|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html]
 states that it is acceptable for us to make this breaking change in a minor 
version release since this interface is marked as {{LimitedPrivate}} / 
{{Evolving}}.

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-2.000.patch, 
> HADOOP-16266-branch-2.001.patch, HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[jira] [Updated] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16459:
-
Attachment: HADOOP-16266-branch-2.001.patch

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-2.000.patch, 
> HADOOP-16266-branch-2.001.patch, HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[jira] [Assigned] (HADOOP-16461) Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V reassigned HADOOP-16461:


Assignee: Gopal V

> Regression: FileSystem cache lock parses XML within the lock
> 
>
> Key: HADOOP-16461
> URL: https://issues.apache.org/jira/browse/HADOOP-16461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
>
> {code}
>   fs = createFileSystem(uri, conf);
>   synchronized (this) { // refetch the lock again
> FileSystem oldfs = map.get(key);
> if (oldfs != null) { // a file system is created while lock is 
> releasing
>   fs.close(); // close the new file system
>   return oldfs;  // return the old file system
> }
> // now insert the new file system into the map
> if (map.isEmpty()
> && !ShutdownHookManager.get().isShutdownInProgress()) {
>   ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
> SHUTDOWN_HOOK_PRIORITY);
> }
> fs.key = key;
> map.put(key, fs);
> if (conf.getBoolean(
> FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
>   toAutoClose.add(key);
> }
> return fs;
>   }
> {code}
> The lock now has a ShutdownHook creation, which ends up doing 
> {code}
> HookEntry(Runnable hook, int priority) {
>   this(hook, priority,
>   getShutdownTimeout(new Configuration()),
>   TIME_UNIT_DEFAULT);
> }
> {code}
> which ends up doing a "new Configuration()" within the locked section.
> This indirectly hurts the cache hit scenarios as well, since if the lock on 
> this is held, then the other section cannot be entered either.
> {code}
> I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, 
> FileSystem$Cache$Key) FileSystem.java:3345
> org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) 
> FileSystem.java:3320
> org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
> org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
> {code}
> slowing down the RawLocalFileSystem when there are other threads creating 
> HDFS FileSystem objects at the same time.






[GitHub] [hadoop] smengcl opened a new pull request #1156: HDDS-1830 OzoneManagerDoubleBuffer#stop should wait for daemon thread to die

2019-07-24 Thread GitBox
smengcl opened a new pull request #1156: HDDS-1830 
OzoneManagerDoubleBuffer#stop should wait for daemon thread to die
URL: https://github.com/apache/hadoop/pull/1156
 
 
   





[jira] [Updated] (HADOOP-16461) Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-16461:
-
Description: 
{code}
  fs = createFileSystem(uri, conf);
  synchronized (this) { // refetch the lock again
FileSystem oldfs = map.get(key);
if (oldfs != null) { // a file system is created while lock is releasing
  fs.close(); // close the new file system
  return oldfs;  // return the old file system
}

// now insert the new file system into the map
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
  ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
SHUTDOWN_HOOK_PRIORITY);
}
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
  toAutoClose.add(key);
}
return fs;
  }
{code}

The lock now has a ShutdownHook creation, which ends up doing 

{code}
HookEntry(Runnable hook, int priority) {
  this(hook, priority,
  getShutdownTimeout(new Configuration()),
  TIME_UNIT_DEFAULT);
}
{code}

which ends up doing a "new Configuration()" within the locked section.

This indirectly hurts the cache hit scenarios as well, since if the lock on 
this is held, then the other section cannot be entered either.

{code}
I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, 
FileSystem$Cache$Key) FileSystem.java:3345
org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) 
FileSystem.java:3320
org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
{code}

slowing down the RawLocalFileSystem when there are other threads creating HDFS 
FileSystem objects at the same time.

  was:
{code}
  fs = createFileSystem(uri, conf);
  synchronized (this) { // refetch the lock again
FileSystem oldfs = map.get(key);
if (oldfs != null) { // a file system is created while lock is releasing
  fs.close(); // close the new file system
  return oldfs;  // return the old file system
}

// now insert the new file system into the map
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
  ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
SHUTDOWN_HOOK_PRIORITY);
}
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
  toAutoClose.add(key);
}
return fs;
  }
{code}

The lock now has a ShutdownHook creation, which ends up doing 

{code}
HookEntry(Runnable hook, int priority) {
  this(hook, priority,
  getShutdownTimeout(new Configuration()),
  TIME_UNIT_DEFAULT);
}
{code}

which ends up doing a "new Configuration()" within the locked section.




> Regression: FileSystem cache lock parses XML within the lock
> 
>
> Key: HADOOP-16461
> URL: https://issues.apache.org/jira/browse/HADOOP-16461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Gopal V
>Priority: Major
>
> {code}
>   fs = createFileSystem(uri, conf);
>   synchronized (this) { // refetch the lock again
> FileSystem oldfs = map.get(key);
> if (oldfs != null) { // a file system is created while lock is 
> releasing
>   fs.close(); // close the new file system
>   return oldfs;  // return the old file system
> }
> // now insert the new file system into the map
> if (map.isEmpty()
> && !ShutdownHookManager.get().isShutdownInProgress()) {
>   ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
> SHUTDOWN_HOOK_PRIORITY);
> }
> fs.key = key;
> map.put(key, fs);
> if (conf.getBoolean(
> FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
>   toAutoClose.add(key);
> }
> return fs;
>   }
> {code}
> The lock now has a ShutdownHook creation, which ends up doing 
> {code}
> HookEntry(Runnable hook, int priority) {
>   this(hook, priority,
>   getShutdownTimeout(new Configuration()),
>   TIME_UNIT_DEFAULT);
> }
> {code}
> which ends up doing a "new Configuration()" within the locked section.
> This indirectly hurts the cache hit scenarios as well, since if the lock on 
> this is held, then the other section cannot be entered either.
> {code}
> I/O Setup 0 State: BLOCKED CPU usage on sample: 6ms
> 

[jira] [Created] (HADOOP-16461) Regression: FileSystem cache lock parses XML within the lock

2019-07-24 Thread Gopal V (JIRA)
Gopal V created HADOOP-16461:


 Summary: Regression: FileSystem cache lock parses XML within the 
lock
 Key: HADOOP-16461
 URL: https://issues.apache.org/jira/browse/HADOOP-16461
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Gopal V


{code}
  fs = createFileSystem(uri, conf);
  synchronized (this) { // refetch the lock again
FileSystem oldfs = map.get(key);
if (oldfs != null) { // a file system is created while lock is releasing
  fs.close(); // close the new file system
  return oldfs;  // return the old file system
}

// now insert the new file system into the map
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
  ShutdownHookManager.get().addShutdownHook(clientFinalizer, 
SHUTDOWN_HOOK_PRIORITY);
}
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
  toAutoClose.add(key);
}
return fs;
  }
{code}

The lock now has a ShutdownHook creation, which ends up doing 

{code}
HookEntry(Runnable hook, int priority) {
  this(hook, priority,
  getShutdownTimeout(new Configuration()),
  TIME_UNIT_DEFAULT);
}
{code}

which ends up doing a "new Configuration()" within the locked section.
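
One possible direction (a sketch only, not a committed fix) is to resolve the
shutdown timeout once, outside the critical section, and hand it to the
{{addShutdownHook}} overload that takes an explicit timeout, so {{HookEntry}}
never needs to construct a {{Configuration}} under the cache lock (this assumes
the four-argument overload is available):

{code}
// parse the configuration once, before taking the cache lock
long timeoutSecs = conf.getTimeDuration(
    CommonConfigurationKeysPublic.SERVICE_SHUTDOWN_TIMEOUT,
    CommonConfigurationKeysPublic.SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
    TimeUnit.SECONDS);

synchronized (this) {
  // ... existing cache bookkeeping ...
  // pass the timeout explicitly so HookEntry never does "new Configuration()"
  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
      SHUTDOWN_HOOK_PRIORITY, timeoutSecs, TimeUnit.SECONDS);
}
{code}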








[GitHub] [hadoop] bharatviswa504 commented on issue #1155: HDDS-1842. Implement S3 Abort MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
bharatviswa504 commented on issue #1155: HDDS-1842. Implement S3 Abort MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1155#issuecomment-514813324
 
 
   /retest





[jira] [Commented] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892216#comment-16892216
 ] 

Erik Krogen commented on HADOOP-16459:
--

I've put up a branch-2 patch as well. It has two additional modifications from 
the branch-3.0 patch:
* A lambda for {{GenericTestUtils.waitFor()}} is replaced with an anonymous 
subclass
* The {{RpcScheduler}} interface can no longer have default methods, since 
branch-2 uses Java 7. Unfortunately Java 7 has no way to emulate this behavior, 
so if users have a custom {{RpcScheduler}} implementation, it will break with 
this change. Our [compatibility 
policy|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html]
 states that it is acceptable for us to make this breaking change in a minor 
version release since this interface is marked as {{LimitedPrivate}} / 
{{Evolving}}.
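
To illustrate the incompatibility (simplified; not the exact {{RpcScheduler}}
signatures):

{code}
public interface RpcScheduler {
  int getPriorityLevel(Schedulable obj);

  // On branch-3 (Java 8) a newly added method like this can carry a no-op
  // default body, so existing implementations keep compiling. Java 7 has no
  // default methods, so on branch-2 every custom implementation must now
  // override it explicitly.
  void addResponseTime(String callName, Schedulable schedulable,
      ProcessingDetails details);
}
{code}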

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-2.000.patch, 
> HADOOP-16266-branch-3.0.000.patch, HADOOP-16266-branch-3.1.000.patch, 
> HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[jira] [Commented] (HADOOP-16451) Update jackson-databind to 2.9.9.1

2019-07-24 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892215#comment-16892215
 ] 

Siyao Meng commented on HADOOP-16451:
-

Thanks [~aajisaka] [~jojochuang] for the review!

> Update jackson-databind to 2.9.9.1
> --
>
> Key: HADOOP-16451
> URL: https://issues.apache.org/jira/browse/HADOOP-16451
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16451.001.patch, HADOOP-16451.002.patch
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2019-12814
> CVE-2019-12814 flags 2.9.9 as vulnerable. A new version 2.9.9.1 is available.






[jira] [Updated] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16459:
-
Attachment: HADOOP-16266-branch-2.000.patch

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-2.000.patch, 
> HADOOP-16266-branch-3.0.000.patch, HADOOP-16266-branch-3.1.000.patch, 
> HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[jira] [Commented] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892198#comment-16892198
 ] 

Erik Krogen commented on HADOOP-16459:
--

I should note that all of the patches attached include the follow-on commit to 
fix the issue that was discovered after HADOOP-16266 was committed.

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[GitHub] [hadoop] hadoop-yetus commented on issue #1152: HDDS-1817. GetKey fails with IllegalArgumentException.

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1152: HDDS-1817. GetKey fails with 
IllegalArgumentException.
URL: https://github.com/apache/hadoop/pull/1152#issuecomment-514807454
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 373 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 877 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 430 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 622 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 549 | the patch passed |
   | +1 | compile | 366 | the patch passed |
   | +1 | javac | 366 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 615 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 638 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 340 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1944 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7719 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1152/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1152 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0bc089c7fcc3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb69700 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1152/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1152/1/testReport/ |
   | Max. process+thread count | 4192 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1152/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892188#comment-16892188
 ] 

Erik Krogen edited comment on HADOOP-16459 at 7/24/19 9:18 PM:
---

Attached branch-3.0 patch. The merge difference was smaller than I expected; no 
logic changes were necessary. The signatures of some methods that appeared in 
the vicinity of changes were different, but not in a way that affected this 
patch.

One notable difference was that {{TestConsistentReadsObserver}} doesn't yet 
exist in branch-3.0, so the modifications to that test were excluded. If 
HDFS-14573 is finalized before this goes in, those changes should be brought 
back.


was (Author: xkrogen):
Attached branch-3.0 patch. The merge difference was smaller than I expected; no 
logic changes were necessary. The signatures of some methods that appeared in 
the vicinity of changes were different, but not in a way that affected this 
patch.

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[jira] [Commented] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892188#comment-16892188
 ] 

Erik Krogen commented on HADOOP-16459:
--

Attached branch-3.0 patch. The merge difference was smaller than I expected; no 
logic changes were necessary. The signatures of some methods that appeared in 
the vicinity of changes were different, but not in a way that affected this 
patch.

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[GitHub] [hadoop] arp7 commented on a change in pull request #1155: HDDS-1842. Implement S3 Abort MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
arp7 commented on a change in pull request #1155: HDDS-1842. Implement S3 Abort 
MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1155#discussion_r307023290
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
 ##
 @@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.s3.multipart;
+
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.PartKeyInfo;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Response for Multipart Abort Request.
+ */
+public class S3MultipartUploadAbortResponse extends OMClientResponse {
+
+  private String multipartKey;
+  private long timeStamp;
+  private OmMultipartKeyInfo omMultipartKeyInfo;
+
+  public S3MultipartUploadAbortResponse(String multipartKey,
+  long timeStamp,
+  OmMultipartKeyInfo omMultipartKeyInfo,
+  OMResponse omResponse) {
+super(omResponse);
+this.multipartKey = multipartKey;
+this.timeStamp = timeStamp;
+this.omMultipartKeyInfo = omMultipartKeyInfo;
+  }
+
+  @Override
+  public void addToDBBatch(OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation) throws IOException {
+
+if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
+
+  // Delete from openKey table and multipart info table.
+  omMetadataManager.getOpenKeyTable().deleteWithBatch(batchOperation,
+  multipartKey);
+  omMetadataManager.getMultipartInfoTable().deleteWithBatch(batchOperation,
 
 Review comment:
   Indentation looks wrong perhaps?
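
   For instance, the continuation line would normally take a four-space hanging
   indent under the project checkstyle rules (a sketch of one possible layout):

{code}
omMetadataManager.getMultipartInfoTable()
    .deleteWithBatch(batchOperation, multipartKey);
{code}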





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1155: HDDS-1842. Implement S3 Abort MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
bharatviswa504 opened a new pull request #1155: HDDS-1842. Implement S3 Abort 
MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1155
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1140: HDDS-1819. Implement S3 Commit MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
bharatviswa504 commented on issue #1140: HDDS-1819. Implement S3 Commit MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1140#issuecomment-514799385
 
 
   Thank You @arp7 and @xiaoyuyao for the review.
   I have committed this to trunk.





[GitHub] [hadoop] bharatviswa504 merged pull request #1140: HDDS-1819. Implement S3 Commit MPU request to use Cache and DoubleBuffer.

2019-07-24 Thread GitBox
bharatviswa504 merged pull request #1140: HDDS-1819. Implement S3 Commit MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1140
 
 
   





[jira] [Updated] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16459:
-
Attachment: HADOOP-16266-branch-3.0.000.patch

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[GitHub] [hadoop] hgadre opened a new pull request #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-07-24 Thread GitBox
hgadre opened a new pull request #1154: [HDDS-1200] Add support for checksum 
verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154
 
 
   





[jira] [Created] (HADOOP-16460) ABFS: fix for Server Name Indication (SNI)

2019-07-24 Thread Thomas Marquardt (JIRA)
Thomas Marquardt created HADOOP-16460:
-

 Summary: ABFS: fix for Server Name Indication (SNI)
 Key: HADOOP-16460
 URL: https://issues.apache.org/jira/browse/HADOOP-16460
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.1.2
Reporter: Thomas Marquardt
Assignee: Vishwajeet Dusane


We need to update wildfly-openssl to 1.0.7.Final in ./hadoop-project/pom.xml.

ABFS depends on wildfly-openssl for secure sockets because of its performance 
improvements. The current wildfly-openssl does not support Server Name 
Indication (SNI). A fix was made in 
https://github.com/wildfly/wildfly-openssl/issues/59 and there is an official 
release of wildfly-openssl with the fix: 
https://github.com/wildfly/wildfly-openssl/releases/tag/1.0.7.Final
The fix has been validated.
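
A sketch of the corresponding change in ./hadoop-project/pom.xml (the Maven
coordinates below are assumed to match the existing dependency entry):

{code}
<dependency>
  <groupId>org.wildfly.openssl</groupId>
  <artifactId>wildfly-openssl</artifactId>
  <version>1.0.7.Final</version>
</dependency>
{code}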






[jira] [Updated] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16459:
-
Status: Patch Available  (was: Open)

Attached branch-3.2 and branch-3.1 patches, both of which had only very minor 
merge conflicts (imports). {{branch-3.0}} has some non-trivial differences; I 
am working on a patch now.

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-3.1.000.patch, 
> HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[jira] [Updated] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16459:
-
Attachment: HADOOP-16266-branch-3.2.000.patch
HADOOP-16266-branch-3.1.000.patch

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-3.1.000.patch, 
> HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[GitHub] [hadoop] hadoop-yetus commented on issue #1033: HDDS-1391 : Add ability in OM to serve delta updates through an API.

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1033: HDDS-1391 : Add ability in OM to serve 
delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#issuecomment-514788362
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | +1 | mvninstall | 576 | trunk passed |
   | +1 | compile | 352 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 789 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 412 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 605 | trunk passed |
   | -0 | patch | 451 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 533 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | cc | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | -0 | checkstyle | 32 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -0 | checkstyle | 36 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 628 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 63 | hadoop-hdds generated 1 new + 15 unchanged - 0 fixed = 
16 total (was 15) |
   | +1 | findbugs | 621 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 280 | hadoop-hdds in the patch passed. |
   | -1 | unit | 176 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 5691 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1033 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux f57c9f61d1b8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb69700 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/testReport/ |
   | Max. process+thread count | 1287 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] nandakumar131 opened a new pull request #1153: HDDS-1855. TestStorageContainerManager#testScmProcessDatanodeHeartbeat is failing.

2019-07-24 Thread GitBox
nandakumar131 opened a new pull request #1153: HDDS-1855. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat is failing.
URL: https://github.com/apache/hadoop/pull/1153
 
 
   





[GitHub] [hadoop] nandakumar131 opened a new pull request #1152: HDDS-1817. GetKey fails with IllegalArgumentException.

2019-07-24 Thread GitBox
nandakumar131 opened a new pull request #1152: HDDS-1817. GetKey fails with 
IllegalArgumentException.
URL: https://github.com/apache/hadoop/pull/1152
 
 
   





[jira] [Updated] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16459:
-
Description: 
We would like to target pulling HADOOP-16266, an important operability 
enhancement and prerequisite for HDFS-14403, into branch-2.

It's only present in trunk now so we also need to backport through the 3.x 
lines.

  was:We would like to target pulling HADOOP-16266, an important operability 
enhancement and prerequisite for HDFS-14403, into branch-2.


> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1150: HDDS-1816: ContainerStateMachine should limit number of pending apply transactions

2019-07-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1150: HDDS-1816: 
ContainerStateMachine should limit number of pending apply transactions
URL: https://github.com/apache/hadoop/pull/1150#discussion_r306971240
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -175,6 +179,10 @@ public ContainerStateMachine(RaftGroupId gid, 
ContainerDispatcher dispatcher,
 final int numContainerOpExecutors = conf.getInt(
 OzoneConfigKeys.DFS_CONTAINER_RATIS_NUM_CONTAINER_OP_EXECUTORS_KEY,
 
OzoneConfigKeys.DFS_CONTAINER_RATIS_NUM_CONTAINER_OP_EXECUTORS_DEFAULT);
+int maxPendingApplyTransactions = conf.getInt(
+
ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINE_MAX_PENDING_APPLY_TRANSACTIONS,
+
ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINE_MAX_PENDING_APPLY_TRANSACTIONS_DEFAULT);
+applyTransactionSemaphore = new Semaphore(maxPendingApplyTransactions);
 
 Review comment:
   Question: do we need the fair setting here, so that pending apply transactions are admitted in order?
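
   For reference, fairness is the optional second constructor argument; a fair
   semaphore grants permits in FIFO order, at some throughput cost (sketch):

{code}
// 'true' makes acquisition FIFO so waiting transactions are admitted in order
applyTransactionSemaphore = new Semaphore(maxPendingApplyTransactions, true);
{code}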





[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892093#comment-16892093
 ] 

Hadoop QA commented on HADOOP-16245:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
0s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975665/HADOOP-16245.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d1d93b08ec29 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cf9ff08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16419/testReport/ |
| Max. process+thread count | 1388 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16419/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Enabling SSL within 

[jira] [Updated] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
Priority: Minor  (was: Major)

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Minor
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, AuthFilter 
> is still set up.  This prevents the datanode from talking to the namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether security 
> is enabled or not.  This is incorrect.  When simple security is chosen and 
> StaticUserWebFilter is in use, the AuthFilter check should not be required for 
> the datanode to communicate with the namenode.






[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
Summary: Hadoop does not work with Kerberos config in hdfs-site.xml for 
simple security  (was: Hadoop does not work without Kerberos for simple 
security)

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Minor
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still added.  This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen 
> together with StaticUserWebFilter, the AuthFilter check should not be 
> required for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892087#comment-16892087
 ] 

Eric Yang commented on HADOOP-16457:


This problem is not related to HADOOP-16354.  If 
dfs.datanode.kerberos.principal is set in the namenode's hdfs-site.xml, then 
ServiceAuthorizationManager expects the datanode username in Kerberos 
principal format without checking whether hadoop.security.authentication == 
simple.  The easy solution is to remove the dfs.datanode.kerberos.principal 
config from hdfs-site.xml.  There is room for enhancement in this area to 
make the dfs.datanode.kerberos.principal config less disruptive to a 
simple-security setup.
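
For illustration only, the kind of guard being suggested could look roughly 
like this ({{checkKerberosPrincipal}} is a hypothetical helper standing in 
for the real validation; UserGroupInformation.isSecurityEnabled() is a real 
API):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class PrincipalCheckSketch {
  // hypothetical helper standing in for the real principal validation
  static void checkKerberosPrincipal(String user, String principal) {
    if (principal != null && !user.matches(".*/.*@.*")) {
      throw new IllegalArgumentException(user + " is not a Kerberos principal");
    }
  }

  static void authorize(String user, Configuration conf) {
    if (UserGroupInformation.isSecurityEnabled()) {
      // only in Kerberos mode require dn/_HOST@REALM style names
      checkKerberosPrincipal(user, conf.get("dfs.datanode.kerberos.principal"));
    }
    // with simple auth, the short user name is accepted as-is
  }
}
{code}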

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still added.  This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen 
> together with StaticUserWebFilter, the AuthFilter check should not be 
> required for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HADOOP-16457:
--

Assignee: (was: Prabhu Joseph)

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Major
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still added.  This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen 
> together with StaticUserWebFilter, the AuthFilter check should not be 
> required for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1150: HDDS-1816: ContainerStateMachine should limit number of pending apply transactions

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1150: HDDS-1816: ContainerStateMachine should 
limit number of pending apply transactions
URL: https://github.com/apache/hadoop/pull/1150#issuecomment-514754477
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 637 | trunk passed |
   | +1 | compile | 400 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 875 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   | 0 | spotbugs | 457 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 676 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 611 | the patch passed |
   | +1 | compile | 411 | the patch passed |
   | +1 | javac | 411 | the patch passed |
   | -0 | checkstyle | 36 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 661 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 659 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 324 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2027 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 8039 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1150 |
   | JIRA Issue | HDDS-1816 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 985262bfb746 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf9ff08 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/2/testReport/ |
   | Max. process+thread count | 4899 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1151: HDDS-1853. Fix failing blockade test-cases.

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1151: HDDS-1853. Fix failing blockade 
test-cases.
URL: https://github.com/apache/hadoop/pull/1151#issuecomment-514753413
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for branch |
   | +1 | mvninstall | 594 | trunk passed |
   | +1 | compile | 371 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | pylint | 4 | Error running pylint. Please check pylint stderr files. |
   | +1 | shadedclient | 743 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 377 | the patch passed |
   | +1 | javac | 377 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | pylint | 9 | Error running pylint. Please check pylint stderr files. |
   | +1 | pylint | 9 | There were no new pylint issues. |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 638 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 284 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3536 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7811 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1151/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs pylint |
   | uname | Linux e011f9310ef7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf9ff08 |
   | Default Java | 1.8.0_212 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1151/1/artifact/out/branch-pylint-stderr.txt
 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1151/1/artifact/out/patch-pylint-stderr.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1151/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1151/1/testReport/ |
   | Max. process+thread count | 3951 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/fault-injection-test/network-tests 
hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1151/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 pylint=1.9.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-16458) LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892077#comment-16892077
 ] 

Steve Loughran commented on HADOOP-16458:
-

This happens when the LocatedFileStatusFetcher gets null back from glob; the 
fact that the stack trace from the raised fault isn't propagated is something 
I plan to fix too. 
{code}
@Override
public Result call() throws Exception {
  Result result = new Result();
  FileSystem fs = path.getFileSystem(conf);
  result.fs = fs;
  FileStatus[] matches = fs.globStatus(path, inputFilter);
  if (matches == null) {   // no matches
    // so the error is raised here
    result.addError(new IOException("Input path does not exist: " + path));
  } else if (matches.length == 0) {
    result.addError(new IOException("Input Pattern " + path
        + " matches 0 files"));
  } else {
    result.matchedFileStatuses = matches;
  }
  return result;
}
{code}
FWIW, I'd actually tighten down the exceptions raised to an FNFE and PathIOE if 
I wasn't worried about breaking things.
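
A minimal sketch of the tightened handling, dropping straight into the call() 
body quoted above (FileNotFoundException is java.io; the LOG field is an 
assumption for illustration):
{code:java}
FileStatus[] matches = fs.globStatus(path, inputFilter);
if (matches == null) {
  // surface the "why" at debug level and keep a typed exception
  LOG.debug("globStatus({}) returned null", path);
  result.addError(new FileNotFoundException(
      "Input path does not exist: " + path));
}
{code}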

> LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
> ---
>
> Key: HADOOP-16458
> URL: https://issues.apache.org/jira/browse/HADOOP-16458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
> Environment: S3 + S3Guard
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which is 
> using globStatus to find files.
> I'd say "turn S3Guard on", except that already appears to be the case, and 
> the dataset being read is over 1h old.
> That makes it harder than I'd like to blame S3 for what sounds like an 
> inconsistency.
> We're also hampered by the number of debug-level statements in the globber 
> code being approximately none; there's no debugging to turn on. All we know 
> is that globStatus returns null without any explanation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16433) S3Guard: Filter expired entries and tombstones when listing with MetadataStore#listChildren

2019-07-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892073#comment-16892073
 ] 

Hudson commented on HADOOP-16433:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16978 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16978/])
HADOOP-16433. S3Guard: Filter expired entries and tombstones when (stevel: rev 
7b219778e05a50e33cca75d727e62783322b7f80)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DDBPathMetadata.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestDirListingMetadata.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestLocalMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DirListingMetadata.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathMetadata.java


> S3Guard: Filter expired entries and tombstones when listing with 
> MetadataStore#listChildren
> ---
>
> Key: HADOOP-16433
> URL: https://issues.apache.org/jira/browse/HADOOP-16433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Blocker
>
> Currently, we don't filter out entries in {{listChildren}} implementations.
> This can cause bugs and inconsistencies, so this should be fixed.
> It can lead to a state we can't recover from, as in the following:
> {{guarded and raw (OOB op) clients are doing ops to S3}}
> {noformat}
> Guarded: touch /
> Guarded: touch /
> Guarded: rm / {{-> tombstone in MS}}
> RAW: touch //file.ext {{-> file is hidden with a tombstone}}
> Guarded: ls / {{-> only  will show up in the listing. }}
> {noformat}
> After we change the following code
> {code:java}
>   final List<DDBPathMetadata> metas = new ArrayList<>();
>   for (Item item : items) {
> DDBPathMetadata meta = itemToPathMetadata(item, username);
> metas.add(meta);
>   }
> {code}
> to 
> {code:java}
> // handle expiry - only add not expired entries to listing.
> if (meta.getLastUpdated() == 0 ||
> !meta.isExpired(ttlTimeProvider.getMetadataTtl(),
> ttlTimeProvider.getNow())) {
>   metas.add(meta);
> }
> {code}
> we will filter out expired entries from the listing, so we can recover from 
> these kinds of OOB ops.
> Note:  we have to handle the lastUpdated == 0 case, where the lastUpdated 
> field is not filled in!
> Note: this can only be fixed cleanly after HADOOP-16383 is fixed because we 
> need to have the TTLtimeProvider in MS to handle this internally.
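
Putting the change in context, the filtered loop would read roughly as follows 
(an illustrative sketch assembled from the snippets above, not the committed 
diff):
{code:java}
final List<DDBPathMetadata> metas = new ArrayList<>();
for (Item item : items) {
  DDBPathMetadata meta = itemToPathMetadata(item, username);
  // only add entries that are not expired; lastUpdated == 0 means "not set"
  if (meta.getLastUpdated() == 0
      || !meta.isExpired(ttlTimeProvider.getMetadataTtl(),
          ttlTimeProvider.getNow())) {
    metas.add(meta);
  }
}
{code}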



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 merged pull request #1151: HDDS-1853. Fix failing blockade test-cases.

2019-07-24 Thread GitBox
nandakumar131 merged pull request #1151: HDDS-1853. Fix failing blockade 
test-cases.
URL: https://github.com/apache/hadoop/pull/1151
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on issue #1151: HDDS-1853. Fix failing blockade test-cases.

2019-07-24 Thread GitBox
nandakumar131 commented on issue #1151: HDDS-1853. Fix failing blockade 
test-cases.
URL: https://github.com/apache/hadoop/pull/1151#issuecomment-514748228
 
 
   Test failures are not related.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-24 Thread Erik Krogen (JIRA)
Erik Krogen created HADOOP-16459:


 Summary: Backport [HADOOP-16266] "Add more fine-grained processing 
time metrics to the RPC layer" to branch-2
 Key: HADOOP-16459
 URL: https://issues.apache.org/jira/browse/HADOOP-16459
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Erik Krogen
Assignee: Erik Krogen


We would like to target pulling HADOOP-16266, an important operability 
enhancement and prerequisite for HDFS-14403, into branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16458) LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892027#comment-16892027
 ] 

Steve Loughran commented on HADOOP-16458:
-

{code}
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: 
s3a://demo/user/qa/schemaevolution/tests/data/all100k
2019-07-22 14:26:49,833  
org.apache.hadoop.mapred.LocatedFileStatusFetcher.getFileStatuses(LocatedFileStatusFetcher.java:155)
2019-07-22 14:26:49,833  
org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:247)
2019-07-22 14:26:49,834  
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
2019-07-22 14:26:49,834  
org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:524)
2019-07-22 14:26:49,834  
org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:781)
2019-07-22 14:26:49,834  
org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
{code}

> LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
> ---
>
> Key: HADOOP-16458
> URL: https://issues.apache.org/jira/browse/HADOOP-16458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
> Environment: S3 + S3Guard
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which is 
> using globStatus to find files.
> I'd say "turn S3Guard on", except that already appears to be the case, and 
> the dataset being read is over 1h old.
> That makes it harder than I'd like to blame S3 for what sounds like an 
> inconsistency.
> We're also hampered by the number of debug-level statements in the globber 
> code being approximately none; there's no debugging to turn on. All we know 
> is that globStatus returns null without any explanation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16458) LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16458:
---

 Summary: LocatedFileStatusFetcher.getFileStatuses failing 
intermittently with s3
 Key: HADOOP-16458
 URL: https://issues.apache.org/jira/browse/HADOOP-16458
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
 Environment: S3 + S3Guard
Reporter: Steve Loughran
Assignee: Steve Loughran


Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which is 
using globStatus to find files.

I'd say "turn S3Guard on", except that already appears to be the case, and the 
dataset being read is over 1h old.

That makes it harder than I'd like to blame S3 for what sounds like an 
inconsistency.

We're also hampered by the number of debug-level statements in the globber code 
being approximately none; there's no debugging to turn on. All we know is that 
globStatus returns null without any explanation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16433) S3Guard: Filter expired entries and tombstones when listing with MetadataStore#listChildren

2019-07-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16433.
-
Resolution: Fixed

+1, committed to trunk.

Thanks!

> S3Guard: Filter expired entries and tombstones when listing with 
> MetadataStore#listChildren
> ---
>
> Key: HADOOP-16433
> URL: https://issues.apache.org/jira/browse/HADOOP-16433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Blocker
>
> Currently, we don't filter out entries in {{listChildren}} implementations.
> This can cause bugs and inconsistencies, so this should be fixed.
> It can lead to a state we can't recover from, as in the following:
> {{guarded and raw (OOB op) clients are doing ops to S3}}
> {noformat}
> Guarded: touch /
> Guarded: touch /
> Guarded: rm / {{-> tombstone in MS}}
> RAW: touch //file.ext {{-> file is hidden with a tombstone}}
> Guarded: ls / {{-> only  will show up in the listing. }}
> {noformat}
> After we change the following code
> {code:java}
>   final List<DDBPathMetadata> metas = new ArrayList<>();
>   for (Item item : items) {
> DDBPathMetadata meta = itemToPathMetadata(item, username);
> metas.add(meta);
>   }
> {code}
> to 
> {code:java}
> // handle expiry - only add not expired entries to listing.
> if (meta.getLastUpdated() == 0 ||
> !meta.isExpired(ttlTimeProvider.getMetadataTtl(),
> ttlTimeProvider.getNow())) {
>   metas.add(meta);
> }
> {code}
> we will filter out expired entries from the listing, so we can recover from 
> these kinds of OOB ops.
> Note:  we have to handle the lastUpdated == 0 case, where the lastUpdated 
> field is not filled in!
> Note: this can only be fixed cleanly after HADOOP-16383 is fixed because we 
> need to have the TTLtimeProvider in MS to handle this internally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514719851
 
 
   committed to trunk; thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
steveloughran closed pull request #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514716965
 
 
   LGTM
   
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16448) Connection to Hadoop homepage is not secure

2019-07-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891997#comment-16891997
 ] 

Hadoop QA commented on HADOOP-16448:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-16448 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16448 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16418/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Connection to Hadoop homepage is not secure
> ---
>
> Key: HADOOP-16448
> URL: https://issues.apache.org/jira/browse/HADOOP-16448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.23.11, 2.4.0, 2.5.0, 2.4.1, 2.5.1, 2.5.2, 2.6.0, 
> 2.6.1, 2.7.0, 2.8.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3, 2.7.3, 2.9.0, 2.6.4, 2.6.5, 
> 2.7.4, 2.8.1, 2.8.2, 2.8.3, 2.7.5, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6, 
> 3.2.0, 3.0.2, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Kaspar Tint
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Attachments: Screen Shot 2019-07-23 at 9.37.54 AM.png
>
>
> When visiting the [Hadoop 
> website|https://hadoop.apache.org/docs/r3.2.0/index.html] with the latest 
> Firefox browser (v 68.0.1) it appears that the website cannot be reached 
> through secure means by default.
> The culprit seems to be that the two header images presented on the 
> page are loaded via *HTTP*
>  !Screen Shot 2019-07-23 at 9.37.54 AM.png!.
> These images are located in the respective locations:
> http://hadoop.apache.org/images/hadoop-logo.jpg
> http://www.apache.org/images/asf_logo_wide.png
> These images can be reached also from the following locations:
> https://hadoop.apache.org/images/hadoop-logo.jpg
> https://www.apache.org/images/asf_logo_wide.png
> As one can see, a fix could be made to include the two header images on the 
> page in a safer way.
> I feel unsafe reading the Hadoop documentation from the official Hadoop 
> webpage over an insecure connection. Thus I felt the need to open this 
> ticket and raise the issue so that everyone can learn from the Hadoop 
> documentation in a safe and secure way.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891972#comment-16891972
 ] 

Eric Yang edited comment on HADOOP-16457 at 7/24/19 4:20 PM:
-

[~Prabhu Joseph] Sorry, I missed the logic during the review process: DFSUtil's 
loadSslConfiguration should check whether simple security and 
StaticUserWebFilter are in use.  Otherwise, it will prevent users from setting 
up a simple-security cluster.


was (Author: eyang):
[~Prabhu Joseph] Sorry, I missed the logic during the review process: DFSUtil's 
loadSslConfiguration should check whether simple security is in use.  
Otherwise, it will prevent users from setting up a simple-security cluster.

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still added.  This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen 
> together with StaticUserWebFilter, the AuthFilter check should not be 
> required for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891972#comment-16891972
 ] 

Eric Yang commented on HADOOP-16457:


[~Prabhu Joseph] Sorry, I missed the logic during the review process: DFSUtil's 
loadSslConfiguration should check whether simple security is in use.  
Otherwise, it will prevent users from setting up a simple-security cluster.

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still added.  This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen 
> together with StaticUserWebFilter, the AuthFilter check should not be 
> required for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
Description: 
When the HTTP filter initializers are set up to use StaticUserWebFilter, 
AuthFilter is still added.  This prevents the datanode from talking to the 
namenode.

Error message in namenode logs:
{code}
2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
initializers set : 
org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2019-07-24 16:06:26,212 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
accessible by dn/eyang-5.openstacklo...@example.com
{code}

Errors in datanode log:
{code}
2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
{code}

The logic in HADOOP-16354 always added AuthFilter regardless of whether 
security is enabled.  This is incorrect.  When simple security is chosen 
together with StaticUserWebFilter, the AuthFilter check should not be 
required for the datanode to communicate with the namenode.

  was:
When the HTTP filter initializers are set up to use StaticUserWebFilter, 
AuthFilter is still added.  This prevents the datanode from talking to the 
namenode.

Error message in namenode logs:
{code}
2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
initializers set : 
org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2019-07-24 16:06:26,212 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
accessible by dn/eyang-5.openstacklo...@example.com
{code}

Errors in datanode log:
{code}
2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
{code}

The logic in HADOOP-16354 always added AuthFilter regardless of which HTTP 
filter initializer is chosen.  This is wrong.


> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still added.  This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen 
> together with StaticUserWebFilter, the AuthFilter check should not be 
> required for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16457:
--

 Summary: Hadoop does not work without Kerberos for simple security
 Key: HADOOP-16457
 URL: https://issues.apache.org/jira/browse/HADOOP-16457
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Eric Yang
Assignee: Prabhu Joseph


When the HTTP filter initializers are set up to use StaticUserWebFilter, 
AuthFilter is still added.  This prevents the datanode from talking to the 
namenode.

Error message in namenode logs:
{code}
2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
initializers set : 
org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2019-07-24 16:06:26,212 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
accessible by dn/eyang-5.openstacklo...@example.com
{code}

Errors in datanode log:
{code}
2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
{code}

The logic in HADOOP-16354 always added AuthFilter regardless of which HTTP 
filter initializer is chosen.  This is wrong.
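
For illustration, the filter-initializer setting from the log line above 
corresponds to a configuration like this (a sketch, set programmatically here 
only for brevity):
{code:java}
Configuration conf = new Configuration();
// simple auth, no Kerberos
conf.set("hadoop.security.authentication", "simple");
// only the static-user filter is requested...
conf.set("hadoop.http.filter.initializers",
    "org.apache.hadoop.http.lib.StaticUserWebFilter");
// ...yet AuthFilterInitializer was appended unconditionally.
{code}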



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514696244
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1134 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 689 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 59 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 26 unchanged - 0 fixed = 27 total (was 26) |
   | +1 | mvnsite | 35 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | -1 | findbugs | 74 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3402 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% 
of time  Unsynchronized access at LocalMetadataStore.java:75% of time  
Unsynchronized access at LocalMetadataStore.java:[line 623] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1134 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 09333099e4fc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf9ff08 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/7/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/7/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries 
and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514694409
 
 
   Thanks for the review @steveloughran, the PR is up to date with those 
changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514683996
 
 
   OK, +1 pending those changes
   
   FWIW, I do think that the iterator code may actually be thread-safe; it's 
just really hard to parse the javadocs, and I do have some distant memories of 
something like this raising a ConcurrentModificationException for me once ... 
but those javadocs say "no", so maybe it was the classic Hashtable or something 
else I got burned by.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries 
and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514676291
 
 
   I'll add synchronized to 
`DirListingMetadata.removeExpiredEntriesFromListing()`. It's a small method, so 
I would not bother with more sophisticated locking than that.
   
   I'll correct the 
   > PathMetadata.toString() shows the lastUpdated state, DDBPathMetadata 
should cut it.
   nit
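
   A minimal sketch of what I mean (`listMap` and the exact signature are 
assumed names for illustration; only the `synchronized` keyword is the point):
   ```java
   synchronized void removeExpiredEntriesFromListing(long ttl, long now) {
     // drop expired children; lastUpdated == 0 means "never set", so keep those
     listMap.values().removeIf(md ->
         md.getLastUpdated() != 0 && md.isExpired(ttl, now));
   }
   ```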
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1151: HDDS-1853. Fix failing blockade test-cases.

2019-07-24 Thread GitBox
nandakumar131 commented on a change in pull request #1151: HDDS-1853. Fix 
failing blockade test-cases.
URL: https://github.com/apache/hadoop/pull/1151#discussion_r306868976
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
 ##
 @@ -32,9 +32,9 @@ OZONE-SITE.XML_ozone.scm.pipeline.owner.container.count=1
 OZONE-SITE.XML_ozone.scm.pipeline.destroy.timeout=15s
 OZONE-SITE.XML_hdds.heartbeat.interval=2s
 OZONE-SITE.XML_hdds.scm.wait.time.after.safemode.exit=30s
-OZONE-SITE.XML_hdds.scm.replication.thread.interval=5s
-OZONE-SITE.XML_hdds.scm.replication.event.timeout=7s
-OZONE-SITE.XML_dfs.ratis.server.failure.duration=25s
+OZONE-SITE.XML_hdds.scm.replication.thread.interval=6s
 
 Review comment:
   In some test-cases we want to write to a pipeline where there are only two 
nodes in the ring (we cut the connection to the third node). In such cases, SCM 
was destroying the pipeline immediately and the replication manager kicked in. 
These values are adjusted so that the client gets enough time to write to a 
pipeline with only two healthy nodes and to verify the writes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891936#comment-16891936
 ] 

Hadoop QA commented on HADOOP-16435:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-16435 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16435 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16417/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked), it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> background
> * hiveserver2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a tezsession
> * tezsession has rpcmetrics
> * with a 150M heap, after around 30 sessions the JVM gets OutOfMemory
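
A minimal sketch of the unregistering shutdown the description calls for, assuming the {{MetricsSystem#unregisterSource(String)}} API; class and field names here are hypothetical, not the attached patch:

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

/** Hypothetical sketch, not the attached patch. */
class UnregisteringMetrics {
  /** The name under which the source was registered with the MetricsSystem. */
  private final String registeredName;

  UnregisteringMetrics(String registeredName) {
    this.registeredName = registeredName;
  }

  /** Remove the source so the DefaultMetricsSystem drops its reference. */
  public void shutdown() {
    DefaultMetricsSystem.instance().unregisterSource(registeredName);
  }
}
{code}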



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 commented on a change in pull request #1151: HDDS-1853. Fix failing blockade test-cases.

2019-07-24 Thread GitBox
mukul1987 commented on a change in pull request #1151: HDDS-1853. Fix failing 
blockade test-cases.
URL: https://github.com/apache/hadoop/pull/1151#discussion_r306863901
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
 ##
 @@ -32,9 +32,9 @@ OZONE-SITE.XML_ozone.scm.pipeline.owner.container.count=1
 OZONE-SITE.XML_ozone.scm.pipeline.destroy.timeout=15s
 OZONE-SITE.XML_hdds.heartbeat.interval=2s
 OZONE-SITE.XML_hdds.scm.wait.time.after.safemode.exit=30s
-OZONE-SITE.XML_hdds.scm.replication.thread.interval=5s
-OZONE-SITE.XML_hdds.scm.replication.event.timeout=7s
-OZONE-SITE.XML_dfs.ratis.server.failure.duration=25s
+OZONE-SITE.XML_hdds.scm.replication.thread.interval=6s
 
 Review comment:
   Can you please explain the need for increasing these values?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
steveloughran commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514670105
 
 
   running the tests too (non-scale)
   
   One thing I want to be sure about is: does 
`DirListingMetadata.removeExpiredEntriesFromListing()` work if invoked in one 
thread while another is already doing it? I believe so, because it's using a 
ConcurrentHashMap and the javadocs for that seem to imply it. Do you also 
believe this to be true?
   
   
   nit: now that `PathMetadata.toString()` shows the lastUpdated state, 
`DDBPathMetadata` should cut it.
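   
   For reference, a minimal sketch (names hypothetical, not the actual 
`DirListingMetadata` internals) of why concurrent expiry-filtering over a 
`ConcurrentHashMap` is safe: its entry-set view is weakly consistent and never 
throws `ConcurrentModificationException`, so two threads can run the removal 
at once and each entry is removed at most once.
   
   ```java
   import java.util.concurrent.ConcurrentHashMap;
   
   class ExpirySketch {
     private final ConcurrentHashMap<String, Long> lastUpdated =
         new ConcurrentHashMap<>();
   
     /** Safe to invoke from several threads concurrently. */
     void removeExpired(long ttlMillis, long nowMillis) {
       lastUpdated.entrySet().removeIf(e -> nowMillis - e.getValue() > ttlMillis);
     }
   }
   ```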
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-24 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891927#comment-16891927
 ] 

Erik Krogen commented on HADOOP-16245:
--

Attached v004 patch to address checkstyle/whitespace issues.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch, HADOOP-16245.003.patch, HADOOP-16245.004.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only; the {{java.naming.ldap.factory.socket}} parameter appears to be 
> the right way to set the socket factory to a custom SSL socket factory which 
> correctly sets the key and trust store parameters. See an example 
> [here|https://stackoverflow.com/a/4615497/4979203].
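
For illustration, a minimal sketch of the scoped approach described above 
(class and field names hypothetical, trust store only), registered with JNDI 
via {{env.put("java.naming.ldap.factory.socket", 
ScopedLdapSslSocketFactory.class.getName())}}:

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.security.KeyStore;
import javax.net.SocketFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

/**
 * Hypothetical sketch: an SSLSocketFactory whose trust store is scoped to
 * the LDAP connection only, instead of mutating javax.net.ssl.* system
 * properties. JNDI requires a public static getDefault() method.
 */
public class ScopedLdapSslSocketFactory extends SSLSocketFactory {
  // Static only to satisfy JNDI's reflective getDefault() contract; a real
  // implementation would plumb these in from the LdapGroupsMapping config.
  static volatile String trustStore;
  static volatile char[] trustStorePass;

  private final SSLSocketFactory delegate;

  public ScopedLdapSslSocketFactory() throws Exception {
    KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
    try (FileInputStream in = new FileInputStream(trustStore)) {
      ts.load(in, trustStorePass);
    }
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(
        TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(ts);
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, tmf.getTrustManagers(), null);
    delegate = ctx.getSocketFactory();
  }

  /** Looked up reflectively by the JNDI LDAP provider. */
  public static SocketFactory getDefault() {
    try {
      return new ScopedLdapSslSocketFactory();
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }

  @Override public String[] getDefaultCipherSuites() {
    return delegate.getDefaultCipherSuites();
  }
  @Override public String[] getSupportedCipherSuites() {
    return delegate.getSupportedCipherSuites();
  }
  @Override public Socket createSocket(Socket s, String host, int port,
      boolean autoClose) throws IOException {
    return delegate.createSocket(s, host, port, autoClose);
  }
  @Override public Socket createSocket(String host, int port)
      throws IOException {
    return delegate.createSocket(host, port);
  }
  @Override public Socket createSocket(String host, int port,
      InetAddress localHost, int localPort) throws IOException {
    return delegate.createSocket(host, port, localHost, localPort);
  }
  @Override public Socket createSocket(InetAddress host, int port)
      throws IOException {
    return delegate.createSocket(host, port);
  }
  @Override public Socket createSocket(InetAddress address, int port,
      InetAddress localAddress, int localPort) throws IOException {
    return delegate.createSocket(address, port, localAddress, localPort);
  }
}
{code}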



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-24 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16245:
-
Attachment: HADOOP-16245.004.patch

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch, HADOOP-16245.003.patch, HADOOP-16245.004.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only; the {{java.naming.ldap.factory.socket}} parameter appears to be 
> the right way to set the socket factory to a custom SSL socket factory which 
> correctly sets the key and trust store parameters. See an example 
> [here|https://stackoverflow.com/a/4615497/4979203].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16449) Allow an empty credential provider chain, separate chains for S3 and DDB

2019-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891923#comment-16891923
 ] 

Steve Loughran commented on HADOOP-16449:
-

I have comments, but first can you submit it as a GitHub PR and state which 
s3a endpoint you ran against?

Key point: DDB shouldn't contain any of the logic about how to get its auth 
chain. That's something which should come to it via the StoreContext used to 
help bind it to the rest of the S3A connector.

> Allow an empty credential provider chain, separate chains for S3 and DDB
> 
>
> Key: HADOOP-16449
> URL: https://issues.apache.org/jira/browse/HADOOP-16449
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Attachments: HADOOP-16449.01.patch
>
>
> Currently, credentials cannot be empty (falls back to using the default 
> chain). Credentials for S3 and DDB are always the same.
> In some cases it can be useful to use a different credential chain for S3 and 
> DDB, as well as allow for an empty credential chain.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891922#comment-16891922
 ] 

Zoltan Haindrich edited comment on HADOOP-16435 at 7/24/19 2:54 PM:


[~Jack-Lee] yes, it will be unregistered when the server is stopped. In case I 
don't fully understand your concern, could you be more specific?


was (Author: kgyrtkirk):
[~Jack-Lee] yes, it will be unregistered when the server is stopped. In case I 
don't fully understand your concern, could you elaborate?

> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked), it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> background
> * hiveserver2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a tezsession
> * tezsession has rpcmetrics
> * with a 150M heap, after around 30 sessions the JVM gets OutOfMemory



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891922#comment-16891922
 ] 

Zoltan Haindrich commented on HADOOP-16435:
---

[~Jack-Lee] yes, it will be unregistered when the server is stopped. In case I 
don't fully understand your concern, could you elaborate?

> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked), it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> background
> * hiveserver2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a tezsession
> * tezsession has rpcmetrics
> * with a 150M heap, after around 30 sessions the JVM gets OutOfMemory



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16456) Refactor the S3A codebase into a more maintainable and testable form

2019-07-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16456:
---

Assignee: Steve Loughran

> Refactor the S3A codebase into a more maintainable and testable form
> 
>
> Key: HADOOP-16456
> URL: https://issues.apache.org/jira/browse/HADOOP-16456
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The S3A codebase has grown too complex to maintain. In particular,
> * the lack of layering in the S3AFileSystem class means that all subcomponents 
> (delegation, DynamoDB, block output stream, etc.) get given a back reference 
> and make arbitrary calls into it.
> * We can't test in isolation, and while integration tests are the most 
> rigorous testing we can have, they are slow, hard to inject failures into and 
> do not work on isolated parts of code
> * The code within the S3A FileSystem calls the top-level API internally, 
> mixing the public interface with implementation details
> * We are adding context through S3Guard calls for: consistency, performance 
> and recovery; we can't do that without a clean split between that public API 
> and the internals
> Proposed: 
> # we carefully break up the S3AFileSystem into a layered design
> # with a "StoreContext" to bind components of the connector to it
> # and some form of operation context to be passed in with each request to 
> represent the active operation and its state (including that for S3Guard 
> BulkOperations)
> See [refactoring 
> S3A|https://github.com/steveloughran/engineering-proposals/blob/master/refactoring-s3a.md]
> I've already started using some of this design in the HADOOP-15183 component, 
> for the addition of those S3Guard bulk operations, and to add a medium-life 
> "RenameOperation". The proposal document reviews that experience and 
> discusses improvements.
> As noted: this needs to be done with care. We still need to maintain the 
> existing codebase; the more radically we change the code, the more we increase 
> the risk of the changes being wrong and the harder we make backporting. But we 
> can't sustain the current design.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16456) Refactor the S3A codebase into a more maintainable and testable form

2019-07-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16456:
---

 Summary: Refactor the S3A codebase into a more maintainable and 
testable form
 Key: HADOOP-16456
 URL: https://issues.apache.org/jira/browse/HADOOP-16456
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


The S3A codebase has grown too complex to maintain. In particular,

* the lack of layering in the S3AFileSystem class means that all subcomponents 
(delegation, DynamoDB, block output stream, etc.) get given a back reference 
and make arbitrary calls into it.
* We can't test in isolation, and while integration tests are the most rigorous 
testing we can have, they are slow, hard to inject failures into and do not 
work on isolated parts of code
* The code within the S3A FileSystem calls the top-level API internally, 
mixing the public interface with implementation details
* We are adding context through S3Guard calls for: consistency, performance and 
recovery; we can't do that without a clean split between that public API and 
the internals

Proposed: 

# we carefully break up the S3AFileSystem into a layered design
# with a "StoreContext" to bind components of the connector to it
# and some form of operation context to be passed in with each request to 
represent the active operation and its state (including that for S3Guard 
BulkOperations)


See [refactoring 
S3A|https://github.com/steveloughran/engineering-proposals/blob/master/refactoring-s3a.md]

I've already started using some of this design in the HADOOP-15183 component, 
for the addition of those S3Guard bulk operations, and to add a medium-life 
"RenameOperation". The proposal document reviews that experience and discusses 
improvements.

As noted: this needs to be done with care. We still need to maintain the 
existing codebase; the more radically we change the code, the more we increase 
the risk of the changes being wrong and the harder we make backporting. But we 
can't sustain the current design.
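
To make the layering idea concrete, a loose sketch (interface and method names 
hypothetical, not the design in the proposal document) of handing 
subcomponents a narrow context instead of a back reference to S3AFileSystem:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

/** Hypothetical narrow context bound to subcomponents of the connector. */
interface StoreContext {
  Configuration getConfiguration();
  String getBucket();
  URI getFsUri();
}

/** A subcomponent depends only on the context, never on S3AFileSystem. */
class ExampleComponent {
  private final StoreContext context;

  ExampleComponent(StoreContext context) {
    this.context = context;
  }
}
{code}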





--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891915#comment-16891915
 ] 

lqjacklee commented on HADOOP-16435:


[~kgyrtkirk] org.apache.hadoop.ipc.metrics.RpcMetrics#shutdown will be called 
when the server is stopped, so I wonder whether the solution is OK?

> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked), it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> background
> * hiveserver2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a tezsession
> * tezsession has rpcmetrics
> * with a 150M heap, after around 30 sessions the JVM gets OutOfMemory



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 opened a new pull request #1151: HDDS-1853. Fix failing blockade test-cases.

2019-07-24 Thread GitBox
nandakumar131 opened a new pull request #1151: HDDS-1853. Fix failing blockade 
test-cases.
URL: https://github.com/apache/hadoop/pull/1151
 
 
   This PR is to fix and make sure that all the test-cases in blockade are 
working.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1150: HDDS-1816: ContainerStateMachine should limit number of pending apply transactions

2019-07-24 Thread GitBox
hadoop-yetus commented on issue #1150: HDDS-1816: ContainerStateMachine should 
limit number of pending apply transactions
URL: https://github.com/apache/hadoop/pull/1150#issuecomment-514635063
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 579 | trunk passed |
   | +1 | compile | 355 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 784 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 142 | trunk passed |
   | 0 | spotbugs | 413 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 605 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 551 | the patch passed |
   | +1 | compile | 366 | the patch passed |
   | +1 | javac | 366 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | the patch passed |
   | +1 | findbugs | 619 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 284 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1541 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7137 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1150 |
   | JIRA Issue | HDDS-1816 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2fa951e5a4a6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf9ff08 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/1/testReport/ |
   | Max. process+thread count | 5292 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-16448) Connection to Hadoop homepage is not secure

2019-07-24 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HADOOP-16448:

Status: Patch Available  (was: Open)

> Connection to Hadoop homepage is not secure
> ---
>
> Key: HADOOP-16448
> URL: https://issues.apache.org/jira/browse/HADOOP-16448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Affects Versions: 2.8.5, 2.7.7, 3.0.3, 2.9.2, 3.1.1, 3.0.2, 3.2.0, 2.7.6, 
> 2.8.4, 3.0.1, 2.9.1, 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.8.2, 2.8.1, 2.7.4, 2.6.5, 
> 2.6.4, 2.9.0, 2.7.3, 2.6.3, 2.6.2, 2.7.2, 2.7.1, 2.8.0, 2.7.0, 2.6.1, 2.6.0, 
> 2.5.2, 2.5.1, 2.4.1, 2.5.0, 2.4.0, 0.23.11
>Reporter: Kaspar Tint
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Attachments: Screen Shot 2019-07-23 at 9.37.54 AM.png
>
>
> When visiting the [Hadoop 
> website|https://hadoop.apache.org/docs/r3.2.0/index.html] with the latest 
> Firefox browser (v 68.0.1) it appears that the website cannot be reached 
> through secure means by default.
> The culprit seems to be the fact that the two header images presented on the 
> page are loaded in via *HTTP*
>  !Screen Shot 2019-07-23 at 9.37.54 AM.png!.
> These images are located in the respective locations:
> http://hadoop.apache.org/images/hadoop-logo.jpg
> http://www.apache.org/images/asf_logo_wide.png
> These images can be reached also from the following locations:
> https://hadoop.apache.org/images/hadoop-logo.jpg
> https://www.apache.org/images/asf_logo_wide.png
> As one can see, a fix could be made to use a safer way of including the two 
> header images on the page.
> I feel like I am in danger when reading the Hadoop documentation from the 
> official Hadoop webpage in a non-secure way. Thus I felt the need to open 
> this ticket and raise the issue in order to have a future where everyone can 
> learn from Hadoop documentation in a safe and secure way.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-07-24 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891822#comment-16891822
 ] 

Zoltan Haindrich commented on HADOOP-16062:
---

[~aajisaka] could you please take a look?

note: it seems to me that the {{Configuration.reloadExistingConfigurations}} 
functionality is only used to aid an interesting way to "deprecate" some 
config keys while keeping the deprecation disconnected from the main config 
keys section; I think this could probably be done differently (see the sketch 
below):

* by making sure that this "deprecation" call is either inside 
{{org.apache.hadoop.fs.FileSystem}} implementations (or imported by them)
* by making sure that before constructing any "configuration" objects the 
serviceloader is invoked - to ensure all FileSystem implementations are fully 
loaded, which in turn will run all the static blocks
* what this is missing: this will probably not work for runtime-loaded jars 
which contain filesystem implementations
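
A minimal sketch (hypothetical key names) of the static-block pattern 
described above:

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Hypothetical sketch: deprecations registered when the class loads, e.g.
 * as a side effect of the FileSystem ServiceLoader pass described above.
 */
final class ExampleFsDeprecations {
  static {
    Configuration.addDeprecations(new Configuration.DeprecationDelta[] {
        new Configuration.DeprecationDelta("fs.example.old.key",
            "fs.example.new.key")
    });
  }

  private ExampleFsDeprecations() {
  }
}
{code}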



> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HADOOP-16062.01.patch, jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321
>  !jxray_registry.png! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-07-24 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HADOOP-16062:
-

Assignee: Zoltan Haindrich

> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HADOOP-16062.01.patch, jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321
>  !jxray_registry.png! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-24 Thread GitBox
bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries 
and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-514602153
 
 
   rebased, fixed based on @steveloughran's review.
   Tested against ireland, local and dynamo. No unknown issues.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


