Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/463/

[Oct 2, 2019 11:51:59 PM] (cliang) HDFS-14858. [SBN read] Allow configurably enable/disable

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Resolved] (HADOOP-16599) Allow a SignerInitializer to be specified along with a Custom Signer

2019-10-02 Thread Siddharth Seth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved HADOOP-16599.
-------------------------------------
Fix Version/s: 3.3.0
   Resolution: Fixed

> Allow a SignerInitializer to be specified along with a Custom Signer
> --------------------------------------------------------------------
>
> Key: HADOOP-16599
> URL: https://issues.apache.org/jira/browse/HADOOP-16599
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Fix For: 3.3.0
>
>
> HADOOP-16445 added support for custom signers. This is a follow-up to allow 
> an Initializer to be specified along with the Custom Signer, for any 
> initialization that the specified custom signer requires.
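
For illustration, a minimal sketch of what wiring this up might look like. The 
fs.s3a.custom.signers option comes from HADOOP-16445; the 
"name:signerClass:initializerClass" triple format and the com.example classes 
are assumptions for illustration, not necessarily the committed API:
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical registration of a custom signer plus its initializer;
// the com.example.* classes are placeholders.
Configuration conf = new Configuration();
conf.set("fs.s3a.custom.signers",
    "MySigner:com.example.MySigner:com.example.MySignerInitializer");
// Select the registered signer for S3 requests.
conf.set("fs.s3a.signing-algorithm", "MySigner");
{code}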



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-02 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16626:
------------------------------------

 Summary: S3A ITestRestrictedReadAccess fails
 Key: HADOOP-16626
 URL: https://issues.apache.org/jira/browse/HADOOP-16626
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/s3
Reporter: Siddharth Seth


Just tried running the S3A test suite, and I'm consistently seeing the 
following failures. Command used:
{code}
mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
-Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
{code}

cc [~ste...@apache.org]

{code}
---
Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
---
Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)  Time elapsed: 2.841 s  <<< ERROR!
java.nio.file.AccessDeniedException: test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on test/testNoReadAccess-raw/noReadDir/emptyDir/: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403 Forbidden
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
{code}

[jira] [Created] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1

2019-10-02 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16625:
-------------------------------------

 Summary: Backport HADOOP-14624 to branch-3.1
 Key: HADOOP-16625
 URL: https://issues.apache.org/jira/browse/HADOOP-16625
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of 
them do not compile because of the commons-logging to slf4j migration.

One of the issues is that GenericTestUtils.DelayAnswer does not accept the 
slf4j logger API.

Backport HADOOP-14624 to branch-3.1 to make backporting easier. It updates the 
DelayAnswer signature, but it's in the test scope, so we're not really breaking 
backward compatibility.
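
A rough sketch of the signature change involved (simplified and assumed, not 
the actual GenericTestUtils code):
{code}
import java.util.concurrent.CountDownLatch;
import org.slf4j.Logger;

// Simplified stand-in for GenericTestUtils.DelayAnswer: the constructor
// used to take org.apache.commons.logging.Log; after HADOOP-14624 it
// takes an org.slf4j.Logger, which is what newer tests pass in.
class DelayAnswerSketch {
  private final Logger log;
  private final CountDownLatch fireLatch = new CountDownLatch(1);

  DelayAnswerSketch(Logger log) { // was: DelayAnswerSketch(Log log)
    this.log = log;
  }

  void waitForCall() throws InterruptedException {
    log.info("waiting for call");
    fireLatch.await();
  }
}
{code}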







[jira] [Resolved] (HADOOP-15871) Some input streams do not obey "java.io.InputStream.available" contract

2019-10-02 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15871.
-------------------------------------
Resolution: Won't Fix

> Some input streams do not obey "java.io.InputStream.available" contract
> ------------------------------------------------------------------------
>
> Key: HADOOP-15871
> URL: https://issues.apache.org/jira/browse/HADOOP-15871
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Reporter: Shixiong Zhu
>Priority: Major
>
> E.g., DFSInputStream and S3AInputStream return the size of the remaining 
> available bytes, but the javadoc of "available" says it should return "an 
> estimate of the number of bytes that can be read (or skipped over) from this 
> input stream *without blocking* by the next invocation of a method for this 
> input stream."
> I understand that some applications may rely on the current behavior. It 
> would be great if there were an interface documenting how "available" should 
> be implemented.
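
To make the mismatch concrete, a hedged sketch (not code from Hadoop) of a 
caller that trusts the java.io contract:
{code}
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

class AvailableContractDemo {
  // Per java.io.InputStream, available() estimates how many bytes can be
  // read WITHOUT blocking. A stream that instead reports all remaining
  // bytes (as DFSInputStream and S3AInputStream do) can make this
  // supposedly non-blocking read stall on remote I/O.
  static byte[] readWithoutBlocking(InputStream in) throws IOException {
    byte[] buf = new byte[in.available()]; // may be the whole remaining file
    int n = in.read(buf);                  // may block on network I/O anyway
    return Arrays.copyOf(buf, Math.max(n, 0));
  }
}
{code}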







[jira] [Resolved] (HADOOP-15091) S3aUtils.getEncryptionAlgorithm() always logs @Debug "Using SSE-C"

2019-10-02 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15091.
-------------------------------------
Resolution: Duplicate

> S3aUtils.getEncryptionAlgorithm() always logs @Debug "Using SSE-C"
> -------------------------------------------------------------------
>
> Key: HADOOP-15091
> URL: https://issues.apache.org/jira/browse/HADOOP-15091
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Trivial
>
> Even when you have encryption off or set to SSE-KMS/AES256, the debug logs 
> print a comment about using SSE-C:
> {code}
> 2017-12-05 12:44:33,292 [main] DEBUG s3a.S3AUtils 
> (S3AUtils.java:getEncryptionAlgorithm(1097)) - Using SSE-C with null key
> {code}
> That log statement should only be printed when SSE-C is enabled.
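
The fix amounts to guarding the statement on the algorithm actually selected; 
a hedged sketch (helper shape and names assumed, not the actual S3AUtils 
change):
{code}
// Only emit the SSE-C line when SSE-C is really in use; LOG is the
// class's logger and S3AEncryptionMethods is the existing S3A enum.
private static void logEncryptionChoice(S3AEncryptionMethods algorithm,
    String key) {
  if (algorithm == S3AEncryptionMethods.SSE_C) {
    LOG.debug("Using SSE-C with {} key", key == null ? "null" : "non-null");
  }
}
{code}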







[jira] [Resolved] (HADOOP-14762) S3A warning of obsolete encryption key which is never used

2019-10-02 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14762.
-------------------------------------
Resolution: Duplicate

> S3A warning of obsolete encryption key which is never used
> -----------------------------------------------------------
>
> Key: HADOOP-14762
> URL: https://issues.apache.org/jira/browse/HADOOP-14762
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> During CLI init, I get warned about using an obsolete encryption key setting:
> {code}
> ./hadoop s3guard import  s3a://hwdev-steve-ireland-new
> 2017-08-11 18:08:43,935 DEBUG conf.Configuration: Reloading 3 existing 
> configurations
> 2017-08-11 18:08:45,146 INFO Configuration.deprecation: 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> 2017-08-11 18:08:45,702 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new} is 
> initialized.
> {code}
> I don't have this setting set as far as I can see, not in an XML file nor in 
> any jceks file. Maybe it's falsely being picked up during the scan for jceks 
> files?
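
For context, a hedged sketch of the deprecation mechanism that produces the 
INFO line above; the delta shown is how Configuration remaps the old dash 
name to the new dot name (the exact registration site in S3A is assumed):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configuration.DeprecationDelta;

class RegisterS3ADeprecations {
  static {
    // Registering a delta like this makes Configuration warn whenever the
    // old (dash) name is encountered and transparently use the new (dot) name.
    Configuration.addDeprecations(new DeprecationDelta[] {
        new DeprecationDelta("fs.s3a.server-side-encryption-key",
            "fs.s3a.server-side-encryption.key")
    });
  }
}
{code}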







[jira] [Resolved] (HADOOP-13373) Add S3A implementation of FSMainOperationsBaseTest

2019-10-02 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13373.
-------------------------------------
Fix Version/s: 3.3.0
   Resolution: Fixed

> Add S3A implementation of FSMainOperationsBaseTest
> --------------------------------------------------
>
> Key: HADOOP-13373
> URL: https://issues.apache.org/jira/browse/HADOOP-13373
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> There's a JUnit 4 test suite, {{FSMainOperationsBaseTest}}, which should be 
> implemented in the s3a tests to add a bit more test coverage, including for 
> globbing.
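
A hedged sketch of what such a subclass could look like (the class name and 
filesystem wiring are assumptions; the committed test may differ):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSMainOperationsBaseTest;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical S3A binding for the JUnit 4 suite: supply an S3A
// filesystem and inherit the main-operations tests, globbing included.
public class ITestS3AFSMainOperations extends FSMainOperationsBaseTest {
  @Override
  protected FileSystem createFileSystem() throws Exception {
    Configuration conf = new Configuration();
    // The test bucket URI is a placeholder; real runs take it from the
    // usual S3A test configuration.
    return FileSystem.get(URI.create("s3a://test-bucket/"), conf);
  }
}
{code}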







[jira] [Reopened] (HADOOP-16393) S3Guard init command uses global settings, not those of target bucket

2019-10-02 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-16393:
-------------------------------------

Still seeing this:
{code}
bin/hadoop s3guard init s3a://hwdev-steve-ireland-new/
2019-10-02 18:42:22,019 [main] DEBUG s3guard.S3GuardTool (S3GuardTool.java:run(1758)) - Executing command init
2019-10-02 18:42:22,045 [main] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1137)) - Propagating entries under fs.s3a.bucket.hwdev-steve-ireland-new.
2019-10-02 18:42:22,047 [main] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1158)) - Updating fs.s3a.committer.name from [core-site.xml]
2019-10-02 18:42:22,048 [main] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1158)) - Updating fs.s3a.endpoint from [core-site.xml]
2019-10-02 18:42:22,048 [main] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1158)) - Updating fs.s3a.committer.magic.enabled from [core-site.xml]
2019-10-02 18:42:22,048 [main] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1158)) - Updating fs.s3a.metadatastore.impl from [core-site.xml]
java.lang.IllegalArgumentException: No DynamoDB table name configured
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:484)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:320)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Init.run(S3GuardTool.java:542)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:427)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1797)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1806)
2019-10-02 18:42:22,071 [main] INFO  util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status -1: java.lang.IllegalArgumentException: No DynamoDB table name configured
{code}

> S3Guard init command uses global settings, not those of target bucket
> ----------------------------------------------------------------------
>
> Key: HADOOP-16393
> URL: https://issues.apache.org/jira/browse/HADOOP-16393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> If you call {{s3guard init s3a://name/}}, the custom bucket options of 
> fs.s3a.bucket.name are not picked up; instead, the global values are used.
> Fix: take the name of the bucket and use it to evaluate the per-bucket 
> properties and patch the configuration used for the init command.
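
A hedged sketch of the described fix, reusing the propagateBucketOptions 
helper visible in the log above (the surrounding wiring is assumed):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3AUtils;

// Derive the bucket from the s3a:// argument, then patch the config with
// any fs.s3a.bucket.<name>.* overrides before initializing the metastore.
Configuration patchForBucket(Configuration conf, URI fsUri) {
  String bucket = fsUri.getHost(); // e.g. "name" for s3a://name/
  return S3AUtils.propagateBucketOptions(conf, bucket);
}
{code}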







[jira] [Resolved] (HADOOP-16619) Upgrade jackson and jackson-databind to 2.9.10

2019-10-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16619.
------------------------------------
Resolution: Fixed

Set resolution to Fixed

> Upgrade jackson and jackson-databind to 2.9.10
> ----------------------------------------------
>
> Key: HADOOP-16619
> URL: https://issues.apache.org/jira/browse/HADOOP-16619
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> Two more CVEs (CVE-2019-16335 and CVE-2019-14540) are addressed in 
> jackson-databind 2.9.10.
> For details see Jackson Release 2.9.10 [release 
> notes|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.10].







[jira] [Reopened] (HADOOP-16619) Upgrade jackson and jackson-databind to 2.9.10

2019-10-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16619:
------------------------------------

> Upgrade jackson and jackson-databind to 2.9.10
> ----------------------------------------------
>
> Key: HADOOP-16619
> URL: https://issues.apache.org/jira/browse/HADOOP-16619
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> Two more CVEs (CVE-2019-16335 and CVE-2019-14540) are addressed in 
> jackson-databind 2.9.10.
> For details see Jackson Release 2.9.10 [release 
> notes|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.10].



