[jira] [Resolved] (HADOOP-18468) upgrade jettison json jar to fix CVE-2022-40149

2022-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18468.
-
Resolution: Fixed

> upgrade jettison json jar to fix CVE-2022-40149
> ---
>
> Key: HADOOP-18468
> URL: https://issues.apache.org/jira/browse/HADOOP-18468
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.2, 3.2.4, 3.3.4
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5, 3.3.9
>
>
> A fix for [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40149]
>  
> [https://github.com/jettison-json/jettison/releases/tag/jettison-1.5.1]
> [https://github.com/advisories/GHSA-56h3-78gp-v83r]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18442) Remove the hadoop-openstack module

2022-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18442.
-
Fix Version/s: 3.3.5
 Release Note: The swift:// connector for openstack support has been 
removed. It had fundamental problems (swift's handling of files > 4GB). A 
subset of the S3 protocol is now exported by almost all object store services 
-please use that through the s3a connector instead. The hadoop-openstack jar 
remains, only now it is empty of code. This is to ensure that projects which 
declare the JAR a dependency will still have successful builds.
   Resolution: Fixed

> Remove the hadoop-openstack module
> --
>
> Key: HADOOP-18442
> URL: https://issues.apache.org/jira/browse/HADOOP-18442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
>  the openstack module doesn't get tested or maintained; it's just something 
> else to keep up to date security-wise. As nobody ever files bugs on it, it is 
> clearly not being used either.
>  
>  On-prem object stores support the S3 APIs and/or provide their own hadoop 
> connectors (ozone, IBM).
>  
>  Let's just cut it completely. As someone who co-authored a lot of it I am 
> happy to do the duty. I will do a quick review of all tests to see if there 
> are any left which we could pull into hadoop common...the FS contract tests 
> were initially derived from the ones I did here.
>  






[jira] [Resolved] (HADOOP-18469) Add XMLUtils methods to centralise code that creates secure XML parsers

2022-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18469.
-
Fix Version/s: 3.4.0
   3.3.5
 Assignee: PJ Fanning
   Resolution: Fixed

> Add XMLUtils methods to centralise code that creates secure XML parsers
> ---
>
> Key: HADOOP-18469
> URL: https://issues.apache.org/jira/browse/HADOOP-18469
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> Relates to HDFS-16766
> There are other places in the code where DocumentBuilderFactory instances are 
> created that could benefit from the same changes as HDFS-16766
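> For illustration, a rough sketch of the kind of centralised factory this 
> describes -the class and method names here are illustrative, not the final API:
> {code:java}
> import javax.xml.parsers.DocumentBuilderFactory;
> import javax.xml.parsers.ParserConfigurationException;
>
> public final class SecureXmlSketch {
>   private SecureXmlSketch() {
>   }
>
>   /** One place to build a DocumentBuilderFactory hardened against XXE. */
>   public static DocumentBuilderFactory newSecureDocumentBuilderFactory()
>       throws ParserConfigurationException {
>     DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
>     // Forbid DOCTYPE declarations outright; this blocks most XXE vectors.
>     dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
>     // Belt and braces: also turn off external entity resolution.
>     dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
>     dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
>     dbf.setXIncludeAware(false);
>     dbf.setExpandEntityReferences(false);
>     return dbf;
>   }
> }
> {code}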






[jira] [Resolved] (HADOOP-18378) Implement readFully(long position, byte[] buffer, int offset, int length)

2022-10-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18378.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Implement readFully(long position, byte[] buffer, int offset, int length)
> -
>
> Key: HADOOP-18378
> URL: https://issues.apache.org/jira/browse/HADOOP-18378
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Alessandro Passaro
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Implement readFully(long position, byte[] buffer, int offset, int length) in 
> PrefetchingInputStream, as it currently uses FSInputStream's 
> [readFully|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputStream.java#L136]
>  which calls read(long position, byte[] buffer, int offset, int length).
> This read then seeks to the position (which is ok), but then seeks back to 
> the original starting position at the end (so always seeking back to 0). This 
> is pretty bad for the prefetching implementation as it means lots of caching 
> to disk and getting blocks from disk. 
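> For illustration, a rough sketch of a direct implementation inside the 
> prefetching stream -assuming its positioned read(long, byte[], int, int) 
> serves bytes from the prefetched block without moving the stream's cursor:
> {code:java}
> // Hypothetical override; avoids the seek/read/seek-back dance of the
> // FSInputStream default described above.
> @Override
> public void readFully(long position, byte[] buffer, int offset, int length)
>     throws IOException {
>   validatePositionedReadArgs(position, buffer, offset, length);
>   int totalRead = 0;
>   while (totalRead < length) {
>     int read = read(position + totalRead, buffer, offset + totalRead,
>         length - totalRead);
>     if (read < 0) {
>       throw new EOFException("End of stream after reading "
>           + totalRead + " of " + length + " bytes");
>     }
>     totalRead += read;
>   }
> }
> {code}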






[jira] [Resolved] (HADOOP-18382) Upgrade AWS SDK to V2 - Prerequisites

2022-10-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18382.
-
Resolution: Fixed

this is in 3.4.0, but i'd like to get it into 3.3.5. 
[~ahmarsu] do you want to see about testing this cherrypicked into branch-3.3 
and backported? 

HADOOP-18481 would be a requirement first though

> Upgrade AWS SDK to V2 - Prerequisites 
> --
>
> Key: HADOOP-18382
> URL: https://issues.apache.org/jira/browse/HADOOP-18382
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We want to update the AWS SDK to V2, before we do this we should warn on 
> things that will no longer supported. The following changes should be made:
>  
>  * 
> [getAmazonS3Client()|https://github.com/apache/hadoop/blob/221eb2d68d5b52e4394fd36cb30d5ee9ffeea7f0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1174]
>  - Warn that this method will be removed 
>  * 
> [initCustomSigners()|https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/SignerManager.java#L65]
>  - Warn that the interface is changing, any custom signers will need to be 
> updated
>  * 
> [bindAWSClient|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L840]
>  - If DT is enabled, warn that credential providers interface is changing, 
> any custom cred providers used in binding classes will need to be updated
>  *  
> [buildAWSProviderList|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java#L618]
>  - if any SDK V1 cred providers are in this list, warn that these will be 
> removed
>  * 
> [S3ClientFactory|https://github.com/apache/hadoop/blob/221eb2d68d5b52e4394fd36cb30d5ee9ffeea7f0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ClientFactory.java]
>  - Update javadocs to say this interface will be replaced by a V2 client 
> factory, mark interface deprecated?
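> For illustration, a rough sketch of the warn-once pattern the items above 
> rely on, using the existing LogExactlyOnce helper (class and method names 
> below are illustrative):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import org.apache.hadoop.fs.store.LogExactlyOnce;
>
> final class SdkV2MigrationWarnings {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(SdkV2MigrationWarnings.class);
>   // emits at most one warning per process, however often it is invoked
>   private static final LogExactlyOnce WARN_OF_V1 = new LogExactlyOnce(LOG);
>
>   static void warnV1CredentialProvider(String classname) {
>     WARN_OF_V1.warn("Directly referencing AWS SDK V1 credential provider {}."
>         + " V1 credential providers will be removed once S3A moves to SDK V2",
>         classname);
>   }
> }
> {code}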






[jira] [Created] (HADOOP-18481) AWS v2 SDK warning to skip warning of EnvironmentVariableCredentialsProvider

2022-10-05 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18481:
---

 Summary: AWS v2 SDK warning to skip warning of 
EnvironmentVariableCredentialsProvider
 Key: HADOOP-18481
 URL: https://issues.apache.org/jira/browse/HADOOP-18481
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran
Assignee: Ahmar Suhail



looking at test output with the sdk warnings enabled, it is now always warning 
of a v1 provider reference, even if the user hasn't set any 
fs.s3a.aws.credentials.provider option


{code}
2022-10-05 14:09:09,733 [setup] DEBUG s3a.S3AUtils 
(S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class is 
org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
2022-10-05 14:09:09,733 [setup] DEBUG s3a.S3AUtils 
(S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class is 
org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
2022-10-05 14:09:09,734 [setup] WARN  s3a.SDKV2Upgrade 
(LogExactlyOnce.java:warn(39)) - Directly referencing AWS SDK V1 credential 
provider com.amazonaws.auth.EnvironmentVariableCredentialsProvider. AWS SDK V1 
credential providers will be removed once S3A is upgraded to SDK V2
2022-10-05 14:09:09,734 [setup] DEBUG s3a.S3AUtils 
(S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class is 
com.amazonaws.auth.EnvironmentVariableCredentialsProvider
2022-10-05 14:09:09,734 [setup] DEBUG s3a.S3AUtils 
(S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class is 
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider

{code}

This is because the EnvironmentVariableCredentialsProvider provider is on the 
default list of providers.

Everybody who is using the S3A connector and who has not explicitly declared a 
new set of providers excluding this one will be seeing the warning.

Proposed:

Don't warn on this provider. Instead, with the v2 move, the classname can be 
patched to switch to a modified one.

The alternative would be to provide an s3a-specific env var provider subclass 
of this; while that is potentially good in future, it is a bit more effort 
for the forthcoming 3.3.5 release.
Especially because it will not be in previous versions, people cannot 
explicitly switch to it in their configs and be confident it will always be 
there.
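
For illustration, a rough sketch of the proposed exemption as a guard in the 
warning path (the method name is illustrative):

{code:java}
/**
 * Should a directly referenced V1 credential provider trigger the
 * migration warning? EnvironmentVariableCredentialsProvider is on S3A's
 * default provider list, so warning about it would flag every deployment
 * which has not overridden the provider chain.
 */
static boolean shouldWarnAboutV1Provider(String classname) {
  if ("com.amazonaws.auth.EnvironmentVariableCredentialsProvider"
      .equals(classname)) {
    return false;
  }
  return classname.startsWith("com.amazonaws.auth.");
}
{code}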








[jira] [Created] (HADOOP-18480) upgrade AWS SDK for release 3.3.5

2022-10-05 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18480:
---

 Summary: upgrade AWS SDK for release 3.3.5
 Key: HADOOP-18480
 URL: https://issues.apache.org/jira/browse/HADOOP-18480
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 3.3.5
Reporter: Steve Loughran
Assignee: Steve Loughran


go up to the latest sdk through the usual qualification process.

no doubt it'll be bigger...






[jira] [Resolved] (HADOOP-15807) ITestS3AContractRootDir failure on non-S3Guarded bucket

2022-10-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15807.
-
Resolution: Cannot Reproduce

> ITestS3AContractRootDir failure on non-S3Guarded bucket
> ---
>
> Key: HADOOP-15807
> URL: https://issues.apache.org/jira/browse/HADOOP-15807
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Got a root test failure against S3 London, possibly consistency related. 
> The abstract test case should use eventually() here






[jira] [Resolved] (HADOOP-17942) abfs & s3a FS instantiate triggers warning about deprecated io.bytes.per.checksum

2022-10-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17942.
-
Resolution: Duplicate

> abfs & s3a FS instantiate triggers warning about deprecated 
> io.bytes.per.checksum
> -
>
> Key: HADOOP-17942
> URL: https://issues.apache.org/jira/browse/HADOOP-17942
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, conf
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If you don't turn off the deprecation log, you get told off about 
> dfs.bytes-per-checksum
> {code}
> 2021-09-28 15:40:26,551 INFO Configuration.deprecation: io.bytes.per.checksum 
> is deprecated. Instead, use dfs.bytes-per-checksum
> {code}
> proposed
> * find out where it's used/set
> * stop it






[jira] [Resolved] (HADOOP-15460) S3A FS to add "fs.s3a.create.performance" to the builder file creation option set

2022-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15460.
-
Fix Version/s: 3.3.5
   Resolution: Fixed

> S3A FS to add  "fs.s3a.create.performance" to the builder file creation 
> option set
> --
>
> Key: HADOOP-15460
> URL: https://issues.apache.org/jira/browse/HADOOP-15460
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.5
>
>
> As promised to [~StephanEwen]: add an s3a-specific option to the builder API 
> to create files with all existence checks skipped.
> This
> # eliminates a few hundred milliseconds
> # -avoids any caching of negative HEAD/GET responses in the S3 load 
> balancers.-
> Callers will be expected to know what they are doing.
> FWIW, we are doing some PUT calls in the committer which bypass this stuff, 
> for the same reason. If you've just created a directory, you know there's 
> nothing underneath, so no need to check.
> adding this inside HADOOP-17833 as we are effectively doing this under the 
> magic dir tree. having it as an option and using it to save all 
> manifests/success files also saves one LIST per manifest write (task commit) 
> and the LIST when saving a _SUCCESS file.
>  
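> For illustration, a rough sketch of using the option through the file 
> creation builder (helper method and variables are illustrative):
> {code:java}
> static void writeMarker(FileSystem fs, Path path, byte[] bytes)
>     throws IOException {
>   // ask the store to skip existence checks; only safe when the caller
>   // knows the path cannot already be a file or directory
>   FSDataOutputStream out = fs.createFile(path)
>       .overwrite(true)
>       .opt("fs.s3a.create.performance", true)  // ignored by other stores
>       .build();
>   try {
>     out.write(bytes);
>   } finally {
>     out.close();
>   }
> }
> {code}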






[jira] [Resolved] (HADOOP-13695) S3A to use a thread pool for async path operations

2022-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13695.
-
Fix Version/s: 3.3.5
   Resolution: Done

we have a thread pool

> S3A to use a thread pool for async path operations
> --
>
> Key: HADOOP-13695
> URL: https://issues.apache.org/jira/browse/HADOOP-13695
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: 3.3.5
>
>
> S3A path operations are often slow due to directory scanning, mock directory 
> create/delete, etc. Many of these can be done asynchronously
> * because deletion is eventually consistent, deleting parent dirs after an 
> operation has returned doesn't alter the behaviour, except in the special 
> case of operation failure.
> * scanning for paths/parents of a file in the create operation only needs to 
> complete before the close() operation instantiates the object, no need to 
> block create().
> * parallelized COPY calls would permit asynchronous rename.
> We could either use the thread pool used for block writes, or somehow isolate 
> low cost path ops (GET, DELETE) from the more expensive calls (COPY, PUT) so 
> that a thread doing basic IO doesn't block for the duration of the long op. 
> Maybe also use {{Semaphore.tryAcquire()}} and only start async work if there 
> actually is an idle thread, doing it synchronously if not. Maybe it depends 
> on the operation. path query/cleanup before/after a write is something which 
> could be scheduled as just more futures to schedule in the block write.
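> For illustration, a rough sketch of the tryAcquire() idea -go async only 
> when a permit (and so, roughly, an idle thread) is available:
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Semaphore;
>
> final class MaybeAsync {
>   private final ExecutorService pool;
>   private final Semaphore permits;
>
>   MaybeAsync(ExecutorService pool, int maxConcurrent) {
>     this.pool = pool;
>     this.permits = new Semaphore(maxConcurrent);
>   }
>
>   void run(Runnable pathOperation) {
>     if (permits.tryAcquire()) {
>       // capacity available: hand off the cleanup/scan and return at once
>       pool.execute(() -> {
>         try {
>           pathOperation.run();
>         } finally {
>           permits.release();
>         }
>       });
>     } else {
>       // saturated: do the work inline rather than queue behind COPY/PUT
>       pathOperation.run();
>     }
>   }
> }
> {code}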






[jira] [Resolved] (HADOOP-16921) NPE in s3a byte buffer block upload

2022-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16921.
-
Resolution: Cannot Reproduce

> NPE in s3a byte buffer block upload
> ---
>
> Key: HADOOP-16921
> URL: https://issues.apache.org/jira/browse/HADOOP-16921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> NPE in s3a upload when fs.s3a.fast.upload.buffer = bytebuffer






[jira] [Resolved] (HADOOP-17077) S3A delegation token binding to support secondary binding list

2022-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17077.
-
Resolution: Won't Fix

> S3A delegation token binding to support secondary binding list
> --
>
> Key: HADOOP-17077
> URL: https://issues.apache.org/jira/browse/HADOOP-17077
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> (followon from HADOOP-17050)
> Add the ability of an S3A FS instance to support multiple instances of 
> delegation token bindings.
> The property "fs.s3a.delegation.token.secondary.bindings" will list the 
> classnames of all secondary bindings.
> for each one, an instance shall be created with the canonical service name 
> being: fs URI + [ tokenKind ]. This is to ensure that the URIs are unique for 
> each FS instance -but also that a single fs instance can have multiple tokens 
> in the credential list.
> the instance is just an AbstractDelegationTokenBinding provider of an AWS 
> credential provider chain, with the normal lifecycle and operations to bind 
> to a DT, issue tokens, etc
> * the final list of AWS Credential providers will be built by appending those 
> provided by each binding in turn.
> Token binding at launch
> If the primary token binding binds to a delegation token, then the whole 
> binding is changed such that all secondary tokens MUST also bind. That is: it 
> will be an error if one cannot be found. This is possibly overstrict, but it 
> avoids situations where an incomplete set of tokens is retrieved and this 
> does not surface until later.
> Only the encryption secrets in the primary DT will be used for FS encryption 
> settings.
> Testing: yes.
> Probably also by adding a test-only DT provider which doesn't actually issue 
> any real credentials and so which can be deployed in both ITests and staging 
> tests where we can verify that the chained instantiation works.
> Compatibility: the goal is to be backwards compatible with any already 
> released token provider plugin.
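> For illustration, a rough sketch of the wiring this would have meant; the 
> classnames are placeholders, and the secondary property never shipped, as 
> this was closed Won't Fix:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class DtBindingSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // the existing property naming the primary binding class
>     conf.set("fs.s3a.delegation.token.binding",
>         "org.example.auth.PrimaryTokenBinding");
>     // the proposed property listing the secondary binding classes
>     conf.set("fs.s3a.delegation.token.secondary.bindings",
>         "org.example.auth.AuditTokenBinding,org.example.auth.SessionTokenBinding");
>   }
> }
> {code}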






[jira] [Resolved] (HADOOP-16741) Document `dynamodb:TagResource` an explicit client-side permission for S3Guard

2022-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16741.
-
Resolution: Won't Fix

> Document `dynamodb:TagResource` an explicit client-side permission for S3Guard
> --
>
> Key: HADOOP-16741
> URL: https://issues.apache.org/jira/browse/HADOOP-16741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> We now attempt to tag a DDB table on init if it is untagged (HADOOP-16520). 
> This isn't covered in the documentation (assumed_roles.md), or in the set of 
> permissions generated in {{RolePolicies.allowS3GuardClientOperations()}} 
> where it is used to create assumed role permissions.






[jira] [Created] (HADOOP-18477) Über-jira: S3A Hadoop 3.3.9 features

2022-10-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18477:
---

 Summary: Über-jira: S3A Hadoop 3.3.9 features
 Key: HADOOP-18477
 URL: https://issues.apache.org/jira/browse/HADOOP-18477
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.3.5
Reporter: Steve Loughran
Assignee: Mehakmeet Singh


Changes related to s3a in the next branch-3.3 release. 
Presence in this list != any commitment to implement, unless there's active dev






[jira] [Created] (HADOOP-18476) Abfs and S3A FileContext bindings to close wrapped filesystems in finalizer

2022-10-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18476:
---

 Summary: Abfs and S3A FileContext bindings to close wrapped 
filesystems in finalizer
 Key: HADOOP-18476
 URL: https://issues.apache.org/jira/browse/HADOOP-18476
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure, fs/s3
Affects Versions: 3.3.4
Reporter: Steve Loughran
Assignee: Steve Loughran


if you use the FileContext APIs to talk to abfs or s3a, it creates a new 
wrapped FileSystem implementation, and, because there is no close() call, never 
cleans up.

proposed: add finalizers for these two classes, which we know create helper 
threads, especially if plugins are added
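
For illustration, a rough sketch of such a finalizer, assuming a 
DelegateToFileSystem-style wrapper holding the real FileSystem in a field 
(the field name here is illustrative):

{code:java}
@Override
protected void finalize() throws Throwable {
  try {
    // last-chance cleanup: stop the wrapped store's helper threads when
    // the FileContext API never offered the caller a close() to invoke
    wrappedFs.close();
  } finally {
    super.finalize();
  }
}
{code}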






[jira] [Resolved] (HADOOP-18466) Limit the findbugs suppression IS2_INCONSISTENT_SYNC to S3AFileSystem field

2022-09-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18466.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Limit the findbugs suppression IS2_INCONSISTENT_SYNC to S3AFileSystem field
> ---
>
> Key: HADOOP-18466
> URL: https://issues.apache.org/jira/browse/HADOOP-18466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Limit the findbugs suppression IS2_INCONSISTENT_SYNC to the S3AFileSystem 
> field futurePool, so that the suppression does not hide other synchronization 
> bugs.






[jira] [Resolved] (HADOOP-18456) NullPointerException in ObjectListingIterator's constructor

2022-09-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18456.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> NullPointerException in ObjectListingIterator's constructor
> ---
>
> Key: HADOOP-18456
> URL: https://issues.apache.org/jira/browse/HADOOP-18456
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.9
>Reporter: Quanlong Huang
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> We saw NullPointerExceptions in Impala's S3 tests: IMPALA-11592. It's thrown 
> from the hadoop jar:
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.<init>(Listing.java:621)
> at 
> org.apache.hadoop.fs.s3a.Listing.createObjectListingIterator(Listing.java:163)
> at 
> org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:144)
> at 
> org.apache.hadoop.fs.s3a.Listing.getListFilesAssumingDir(Listing.java:212)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:4790)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listFiles$37(S3AFileSystem.java:4732)
> at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543)
> at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524)
> at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2363)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2382)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:4731)
> at 
> org.apache.impala.common.FileSystemUtil.listFiles(FileSystemUtil.java:754)
> ... {noformat}
> We are using a private build of the hadoop jar. Version: CDP 
> 3.1.1.7.2.16.0-164
> Code snippet of where the NPE is thrown:
> {code:java}
> 604 @Retries.RetryRaw
> 605 ObjectListingIterator(
> 606 Path listPath,
> 607 S3ListRequest request,
> 608 AuditSpan span) throws IOException {
> 609   this.listPath = listPath;
> 610   this.maxKeys = listingOperationCallbacks.getMaxKeys();
> 611   this.request = request;
> 612   this.objectsPrev = null;
> 613   this.iostats = iostatisticsStore()
> 614   .withDurationTracking(OBJECT_LIST_REQUEST)
> 615   .withDurationTracking(OBJECT_CONTINUE_LIST_REQUEST)
> 616   .build();
> 617   this.span = span;
> 618   this.s3ListResultFuture = listingOperationCallbacks
> 619   .listObjectsAsync(request, iostats, span);
> 620   this.aggregator = 
> IOStatisticsContext.getCurrentIOStatisticsContext()
> 621   .getAggregator();   // < thrown here
> 622 }
> {code}
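> For illustration, a defensive rewrite of lines 620-621 -assuming the NPE 
> means getCurrentIOStatisticsContext() returned null in this build- which at 
> least turns the failure into a diagnosable one:
> {code:java}
> IOStatisticsContext context =
>     IOStatisticsContext.getCurrentIOStatisticsContext();
> if (context == null) {
>   // surface a clear error instead of an NPE deep in the listing code
>   throw new IllegalStateException("No IOStatisticsContext for thread "
>       + Thread.currentThread());
> }
> this.aggregator = context.getAggregator();
> {code}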






[jira] [Resolved] (HADOOP-18242) ABFS Rename Failure when tracking metadata is in incomplete state

2022-09-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18242.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> ABFS Rename Failure when tracking metadata is in incomplete state
> -
>
> Key: HADOOP-18242
> URL: https://issues.apache.org/jira/browse/HADOOP-18242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> If a node in the datacenter crashes while processing an operation, 
> occasionally it can leave the Storage-internal blob tracking metadata in an 
> incomplete state.  We expect this to happen occasionally, and so all API’s 
> are designed in such a way that if this incomplete state is observed on a 
> blob, the situation is resolved before the current operation proceeds.  
> However, this incident has exposed a bug specifically with the Rename API, 
> where the incomplete state fails to resolve, leading to this incorrect 
> failure.  As a temporary mitigation, if any other operation is performed on 
> this blob – GetBlobProperties, GetBlob, GetFileProperties, SetFileProperties, 
> etc – it should resolve the incomplete state, and rename will no longer hit 
> this issue.
> StackTrace:
> {code:java}
> 2022-03-22 17:52:19,789 DEBUG [regionserver/euwukwlss-hg50:16020.logRoller] 
> services.AbfsClient: HttpRequest: 
> 404,RenameDestinationParentPathNotFound,cid=ef5cbf0f-5d4a-4630-8a59-3d559077fc24,rid=35fef164-101f-000b-1b15-3ed81800,sent=0,recv=212,PUT,https://euwqdaotdfdls03.dfs.core.windows.net/eykbssc/apps/hbase/data/oldWALs/euwukwlss-hg50.tdf.qa%252C16020%252C1647949929877.1647967939315?timeout=90
>{code}






[jira] [Created] (HADOOP-18460) ITestS3AContractVectoredRead.testStopVectoredIoOperationsUnbuffer failing

2022-09-20 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18460:
---

 Summary: 
ITestS3AContractVectoredRead.testStopVectoredIoOperationsUnbuffer failing
 Key: HADOOP-18460
 URL: https://issues.apache.org/jira/browse/HADOOP-18460
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.4.0
 Environment: seeing a test failure in both parallel and single test 
case runs of 
{{ITestS3AContractVectoredRead.testStopVectoredIoOperationsUnbuffer}}
Reporter: Steve Loughran









[jira] [Resolved] (HADOOP-18377) hadoop-aws maven build to add a prefetch profile to run all tests with prefetching

2022-09-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18377.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> hadoop-aws maven build to add a prefetch profile to run all tests with 
> prefetching
> --
>
> Key: HADOOP-18377
> URL: https://issues.apache.org/jira/browse/HADOOP-18377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Add a prefetch profile to the hadoop-aws build so tests run with prefetching 
> on, similar to how the markers option does
> makes it easy to test everything with prefetching on/off without editing xml 
> files.






[jira] [Resolved] (HADOOP-18448) s3a endpoint per bucket configuration in pyspark is ignored

2022-09-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18448.
-
Resolution: Invalid

> s3a endpoint per bucket configuration in pyspark is ignored
> ---
>
> Key: HADOOP-18448
> URL: https://issues.apache.org/jira/browse/HADOOP-18448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Einav Hollander
>Priority: Major
>
> I'm using EMR emr-6.5.0 cluster in us-east-1 with ec2 instances. cluster is 
> running spark application using pyspark 3.2.1
>  EMR is using Hadoop distribution:Amazon 3.2.1
> my spark application is reading from one bucket in us-west-2 and writing to a 
> bucket in us-east-1.
> since I'm processing a large amount of data I'm paying a lot of money for the 
> network transport. in order to reduce the cost I have created a vpc interface 
> endpoint to s3 in us-west-2. inside the spark application I'm using aws cli 
> for reading the file names from the us-west-2 bucket and it is working through 
> the s3 interface endpoint, but when I use pyspark to read the data it is using 
> the us-east-1 s3 endpoint instead of the us-west-2 endpoint.
>  I tried to use per-bucket configuration but it is being ignored, although I 
> added it to the default configuration and to the spark-submit call.
> I tried to set the following configuration but they are ignored:
>  '--conf', 
> "spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain",
>  '--conf', "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem",
>  '--conf', "spark.hadoop.fs.s3a.bucket.<bucket-name>.endpoint=<us-west-2 vpc endpoint>",
>  '--conf', "spark.hadoop.fs.s3a.bucket.<bucket-name>.endpoint.region=us-west-2",
>  '--conf', "spark.hadoop.fs.s3a.bucket.<bucket-name>.endpoint=<s3 gateway endpoint>",
>  '--conf', "spark.hadoop.fs.s3a.bucket.<bucket-name>.endpoint.region=us-east-1",
>  '--conf', "spark.hadoop.fs.s3a.path.style.access=false"






[jira] [Resolved] (HADOOP-18186) s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-09-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18186.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> s3a prefetching to use SemaphoredDelegatingExecutor for submitting work
> ---
>
> Key: HADOOP-18186
> URL: https://issues.apache.org/jira/browse/HADOOP-18186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Use SemaphoredDelegatingExecutor for each stream to submit work, if 
> possible, for better fairness in processes with many streams.
> This also takes a DurationTrackerFactory to count how long was spent in the 
> queue, something we would want to know.






[jira] [Created] (HADOOP-18442) Remove the hadoop-openstack module

2022-09-05 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18442:
---

 Summary: Remove the hadoop-openstack module
 Key: HADOOP-18442
 URL: https://issues.apache.org/jira/browse/HADOOP-18442
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Steve Loughran


 the openstack module doesn't get tested or maintained; it's just something 
else to keep up to date security-wise. As nobody ever files bugs on it, it is 
clearly not being used either.
 
 On-prem object stores support the S3 APIs and/or provide their own hadoop 
connectors (ozone, IBM).
 
 Let's just cut it completely. As someone who co-authored a lot of it I am 
happy to do the duty. I will do a quick review of all tests to see if there are 
any left which we could pull into hadoop common...the FS contract tests were 
initially derived from the ones I did here.
 






[jira] [Resolved] (HADOOP-18423) TestArnResource.parseAccessPointFromArn failing intermittently

2022-09-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18423.
-
Resolution: Duplicate

> TestArnResource.parseAccessPointFromArn failing intermittently
> --
>
> Key: HADOOP-18423
> URL: https://issues.apache.org/jira/browse/HADOOP-18423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.3
>Reporter: groot
>Assignee: groot
>Priority: Major
>
> TestArnResource.parseAccessPointFromArn failing with
>  
> {code}
> |org.junit.ComparisonFailure: Endpoint does not match 
> expected: but 
> was:|
> |at org.junit.Assert.assertEquals(Assert.java:117)|
> |at 
> org.apache.hadoop.fs.s3a.TestArnResource.parseAccessPointFromArn(TestArnResource.java:60)|
> |at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)|
> |at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)|
> |at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)|
> |at java.lang.reflect.Method.invoke(Method.java:498)|
> |at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)|
> |at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)|
> |at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)|
> |at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)|
> |at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)|
> |at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)|
> |at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)|
> |at java.util.concurrent.FutureTask.run(FutureTask.java:266)|
> |at java.lang.Thread.run(Thread.java:748)|
> | |
> {code}






[jira] [Resolved] (HADOOP-18410) S3AInputStream.unbuffer() async drain not releasing http connections

2022-09-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18410.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

> S3AInputStream.unbuffer() async drain not releasing http connections
> 
>
> Key: HADOOP-18410
> URL: https://issues.apache.org/jira/browse/HADOOP-18410
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Impala tpc-ds setup to s3 is hitting problems with timeout fetching http 
> connections from the s3a fs pool. Disabling s3a async drain makes this 
> problem *go away*. assumption: either those async ops are blocking, or they 
> are not releasing references properly.






[jira] [Resolved] (HADOOP-16259) Distcp to set S3 Storage Class

2022-09-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16259.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Duplicate

> Distcp to set S3 Storage Class
> --
>
> Key: HADOOP-16259
> URL: https://issues.apache.org/jira/browse/HADOOP-16259
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, tools/distcp
>Affects Versions: 2.8.4
>Reporter: Prakash Gopalsamy
>Priority: Minor
> Fix For: 3.4.0, 3.3.9
>
> Attachments: ENHANCE_HADOOP_DISTCP_FOR_CUSTOM_S3_STORAGE_CLASS.docx, 
> ENHANCE_HADOOP_DISTCP_FOR_CUSTOM_S3_STORAGE_CLASS.docx.pdf
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hadoop distcp implementation doesn’t have properties to override Storage 
> class while transferring data to Amazon S3 storage. Hadoop distcp doesn’t set 
> any storage class while transferring data to Amazon S3 storage. Due to this, 
> all the objects moved from a cluster to S3 using Hadoop Distcp are stored in 
> the default storage class “STANDARD”. A new feature to override the default 
> S3 storage class through configuration properties would be helpful for 
> uploading objects in other storage classes. I have come up with a design to 
> implement this feature in a design document and uploaded it to the JIRA. 
> Kindly review and let me know your suggestions.






[jira] [Resolved] (HADOOP-18339) S3A storage class option only picked up when buffering writes to disk

2022-09-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18339.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

> S3A storage class option only picked up when buffering writes to disk
> -
>
> Key: HADOOP-18339
> URL: https://issues.apache.org/jira/browse/HADOOP-18339
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Monthon Klongklaew
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> when you switch s3a output stream buffering to heap or byte buffer, the 
> storage class option isn't added to the put request
> {code}
>   
> fs.s3a.fast.upload.buffer
> bytebuffer
>   
> {code}
> and the ITestS3AStorageClass tests fail.
> {code}
> java.lang.AssertionError: [Storage class of object 
> s3a://stevel-london/test/testCreateAndCopyObjectWithStorageClassGlacier/file1]
>  
> Expecting:
>  
> to be equal to:
>  <"glacier">
> ignoring case considerations
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AStorageClass.assertObjectHasStorageClass(ITestS3AStorageClass.java:215)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassGlacier(ITestS3AStorageClass.java:129)
> {code}
> we noticed this in a code review; the request factory only sets the option 
> when the source is a file, not memory.
> proposed: parameterize the test suite on disk/byte buffer, then fix
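> For illustration, a rough sketch of the single-place fix (method and field 
> names are illustrative; SDK V1 API):
> {code:java}
> import com.amazonaws.services.s3.model.PutObjectRequest;
>
> /** Apply the configured storage class to every PUT, whatever the source. */
> private void maybeSetStorageClass(PutObjectRequest request) {
>   if (storageClass != null) {
>     // previously only done on the file-backed path, so heap/bytebuffer
>     // uploads silently dropped the option
>     request.setStorageClass(storageClass);
>   }
> }
> {code}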






[jira] [Resolved] (HADOOP-18432) hadoop 3.3.4 doesn't have a binary-aarch64 download link

2022-08-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18432.
-
Resolution: Duplicate

> hadoop 3.3.4 doesn't have a binary-aarch64 download link
> 
>
> Key: HADOOP-18432
> URL: https://issues.apache.org/jira/browse/HADOOP-18432
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.4
>Reporter: kangking
>Priority: Major
> Attachments: image-2022-08-31-11-16-56-994.png, 
> image-2022-08-31-11-17-31-812.png
>
>
> [Apache Hadoop|https://hadoop.apache.org/releases.html] 
> !image-2022-08-31-11-16-56-994.png!
> the link is empty
>  
> !image-2022-08-31-11-17-31-812.png!






[jira] [Resolved] (HADOOP-18417) Upgrade Maven Surefire plugin

2022-08-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18417.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

> Upgrade Maven Surefire plugin
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor. Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386






[jira] [Created] (HADOOP-18416) ITestS3AIOStatisticsContext failure

2022-08-23 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18416:
---

 Summary: ITestS3AIOStatisticsContext failure
 Key: HADOOP-18416
 URL: https://issues.apache.org/jira/browse/HADOOP-18416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.9
Reporter: Steve Loughran


test failure running the new ITestS3AIOStatisticsContext. attaching the stack 
and log file.

This happened on a large (12 thread) test run, but i can get it to come back 
intermittently on repeated runs of the whole suite, but never when i just run 
the single test case.

{code}
[ERROR] 
testThreadIOStatisticsForDifferentThreads(org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext)
  Time elapsed: 3.616 s  <<< FAILURE!
java.lang.AssertionError: 
[Counter named stream_write_bytes] 
Expecting actual not to be null
at 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.lookupStatistic(IOStatisticAssertions.java:160)
at 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticLong(IOStatisticAssertions.java:291)
at 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter(IOStatisticAssertions.java:306)
at 
org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext.assertThreadStatisticsForThread(ITestS3AIOStatisticsContext.java:367)
at 
org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext.testThreadIOStatisticsForDifferentThreads(ITestS3AIOStatisticsContext.java:260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
{code}

I'm suspecting some race condition *or* gc pressure is releasing that reference 
in the worker thread.

proposed test changes
* worker thread changes its thread ID for the logs
* stores its thread context into a field, so there's a guarantee of no GC
* logs more as it goes along.
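
For illustration, a rough sketch of the second change (field and variable 
names are illustrative):

{code:java}
// strong reference held by the test object, so the worker's context
// cannot be garbage collected while the test is still asserting on it
private IOStatisticsContext workerContext;

private Runnable newWorker() {
  return () -> {
    Thread.currentThread().setName("junit-worker-"
        + Thread.currentThread().getId());
    workerContext = IOStatisticsContext.getCurrentIOStatisticsContext();
    // ... do the IO, logging progress as it goes ...
  };
}
{code}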






[jira] [Resolved] (HADOOP-18380) fs.s3a.prefetch.block.size to be read through longBytesOption

2022-08-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18380.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> fs.s3a.prefetch.block.size to be read through longBytesOption
> -
>
> Key: HADOOP-18380
> URL: https://issues.apache.org/jira/browse/HADOOP-18380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> use {{longBytesOption(fs.s3a.prefetch.block.size)}}
> this allows for units like M to be used, and is consistent with the rest of 
> the s3a size params. also sets a minimum size to prevent negative values
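> For illustration, what the suffix-aware parsing buys (the key is from this 
> issue; default and minimum values are illustrative):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> Configuration conf = new Configuration();
> conf.set("fs.s3a.prefetch.block.size", "8M");  // suffixes K/M/G now work
> // getLongBytes parses the suffix; the floor keeps out zero/negative values
> long blockSize = Math.max(
>     conf.getLongBytes("fs.s3a.prefetch.block.size", 8 * 1024 * 1024),
>     1);
> {code}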






[jira] [Created] (HADOOP-18410) S3AInputStream async drain not releasing http connections

2022-08-19 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18410:
---

 Summary: S3AInputStream async drain not releasing http connections
 Key: HADOOP-18410
 URL: https://issues.apache.org/jira/browse/HADOOP-18410
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Steve Loughran


Impala tpc-ds setup to s3 is hitting problems with timeout fetching http 
connections from the s3a fs pool. Disabling s3a async drain makes this problem 
*go away*. assumption: either those async ops are blocking, or they are not 
releasing references properly.








[jira] [Resolved] (HADOOP-17882) distcp to use openFile() with sequential IO; ranges of reads

2022-08-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17882.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> distcp to use openFile() with sequential IO; ranges of reads
> 
>
> Key: HADOOP-17882
> URL: https://issues.apache.org/jira/browse/HADOOP-17882
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: tools/distcp
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: 3.3.9
>
>
> once openFile adds standard options for sequential access, distcp to adopt so 
> as to enforce sequential reads on all uploads/backups
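> For illustration, a rough sketch of the openFile() pattern being adopted 
> (standard option keys; the source path and length variables are illustrative):
> {code:java}
> // declare sequential access, and the known length, so object stores can
> // issue one long-lived GET instead of re-opening on every seek
> FSDataInputStream in = fs.openFile(sourcePath)
>     .opt("fs.option.openfile.read.policy", "sequential")
>     .opt("fs.option.openfile.length", Long.toString(sourceLength))
>     .build()
>     .get();  // openFile() is asynchronous; get() joins the future
> {code}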






[jira] [Resolved] (HADOOP-18385) ITestS3ACannedACLs failure; not in a span

2022-08-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18385.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> ITestS3ACannedACLs failure; not in a span
> -
>
> Key: HADOOP-18385
> URL: https://issues.apache.org/jira/browse/HADOOP-18385
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Steve Loughran
>Assignee: groot
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> seen in a test of the prefetch feature branch, but it looks more like this 
> has been lurking for a long time, or just that some code change has moved the 
> api call out of a span.
> {code}
> [INFO] Running org.apache.hadoop.fs.s3a.ITestS3ACannedACLs
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.592 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ACannedACLs
> [ERROR] 
> testCreatedObjectsHaveACLs(org.apache.hadoop.fs.s3a.ITestS3ACannedACLs)  Time 
> elapsed: 0.591 s  <<< ERROR!
> org.apache.hadoop.fs.s3a.audit.AuditFailureException: 
> dbb71c86-e022-4b76-99cf-c1f64dd21389-00013058 unaudited operation executing a 
> request outside an audit span 
> {com.amazonaws.services.s3.model.GetObjectAclRequest size=0, mutating=true}
> at 
> org.apache.hadoop.fs.s3a.ITestS3ACannedACLs.assertObjectHasLoggingGrant(ITestS3ACannedACLs.java:94)
> at 
> org.apache.hadoop.fs.s3a.ITestS3ACannedACLs.testCreatedObjectsHaveACLs(ITestS3ACannedACLs.java:69)
> {code}
> fix is trivial: do the operation within a span
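> For illustration, a rough sketch of that fix (span creation per the S3A 
> auditing API; variable names are illustrative):
> {code:java}
> // wrap the raw GetObjectAcl call in an audit span so the auditor
> // no longer rejects it as an unaudited operation
> try (AuditSpan span = fs.createSpan("GetObjectAcl", key, null)) {
>   AccessControlList acl = s3client.getObjectAcl(bucket, key);
>   // ... assert on the grants ...
> }
> {code}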






[jira] [Resolved] (HADOOP-18181) move org.apache.hadoop.fs.common package into hadoop-common module

2022-08-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18181.
-
Resolution: Fixed

>  move org.apache.hadoop.fs.common package into hadoop-common module
> ---
>
> Key: HADOOP-18181
> URL: https://issues.apache.org/jira/browse/HADOOP-18181
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> move org.apache.hadoop.fs.common package from hadoop-aws, along with any 
> tests, into the hadoop-common jar and the package org.apache.hadoop.fs.impl
> (except for any bits we find are broadly useful in applications to use any 
> new APIs, in which case somewhere more public, such as  o.a.h.util.functional 
> for the futures work)
> we can and should pick the new package and move the classes there, even while 
> they are in hadoop-aws. why so? it lets us add checkstyle/findbugs rules with 
> the final classnames






[jira] [Resolved] (HADOOP-18187) Convert s3a prefetching to use JavaDoc for fields and enums

2022-08-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18187.
-
Resolution: Fixed

fixed in the big HADOOP-18181 patch

> Convert s3a prefetching to use JavaDoc for fields and enums
> ---
>
> Key: HADOOP-18187
> URL: https://issues.apache.org/jira/browse/HADOOP-18187
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Daniel Carl Jones
>Assignee: Steve Loughran
>Priority: Minor
>
> There are lots of good comments for fields and enum values in the current code. 
> Let's expose these to your IDE with JavaDoc instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18318) Update class names to be clear they belong to S3A prefetching

2022-08-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18318.
-
Resolution: Fixed

fixed in the big HADOOP-18181 patch

> Update class names to be clear they belong to S3A prefetching
> -
>
> Key: HADOOP-18318
> URL: https://issues.apache.org/jira/browse/HADOOP-18318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Daniel Carl Jones
>Priority: Minor
>
> tune class names, e.g. S3InputStream -> S3ABufferedStream, S3Reader -> 
> StoreBlockReader, S3File -> OpenS3File. I think we just want to get the S3 
> prefixes off, as all too often that prefix means an AWS SDK class, not 
> something in our own code



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18405) abfs testReadAndWriteWithDifferentBufferSizesAndSeek failure

2022-08-15 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18405:
---

 Summary: abfs testReadAndWriteWithDifferentBufferSizesAndSeek 
failure
 Key: HADOOP-18405
 URL: https://issues.apache.org/jira/browse/HADOOP-18405
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Steve Loughran


(possibly transient) failure of testReadAndWriteWithDifferentBufferSizesAndSeek 
on a parallel test run.

this was a run done with a VPN enabled, which may be causing problems; the run 
was certainly slow

{code}
[ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 701.903 
s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek
[ERROR] 
testReadAndWriteWithDifferentBufferSizesAndSeek[Size=104,857,600](org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek)
  Time elapsed: 673.614 s  <<< FAILURE!
org.junit.ComparisonFailure: [Retry was required due to issue on server side] 
expected:<[0]> but was:<[1]>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
org.apache.hadoop.fs.azurebfs.utils.TracingHeaderValidator.validateBasicFormat(TracingHeaderValidator.java:136)
at 
org.apache.hadoop.fs.azurebfs.utils.TracingHeaderValidator.validateTracingHeader(TracingHeaderValidator.java:77)
at 
org.apache.hadoop.fs.azurebfs.utils.TracingHeaderValidator.callTracingHeaderValidator(TracingHeaderValidator.java:46)
at 
org.apache.hadoop.fs.azurebfs.utils.TracingContext.constructHeader(TracingContext.java:172)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:249)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:217)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:191)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:189)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.read(AbfsClient.java:853)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readRemote(AbfsInputStream.java:544)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readInternal(AbfsInputStream.java:510)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readOneBlock(AbfsInputStream.java:317)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.read(AbfsInputStream.java:263)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek.testReadWriteAndSeek(ITestAbfsReadWriteAndSeek.java:110)
at 
org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek(ITestAbfsReadWriteAndSeek.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:69->testReadWriteAndSeek:110
 [Retry was required due to issue on server side] expected:<[0]> but was:<[1]>
[INFO] 
[ERROR] Tests run: 332, Failures: 1, Errors: 

[jira] [Resolved] (HADOOP-18397) Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18397.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

> Shutdown AWSSecurityTokenService when its resources are no longer in use
> 
>
> Key: HADOOP-18397
> URL: https://issues.apache.org/jira/browse/HADOOP-18397
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> AWSSecurityTokenService resources can be released whenever they are no longer 
> in use. The documentation of AWSSecurityTokenService#shutdown says that while 
> it is not essential for the client to shut down the token service, the client 
> can release the resources early whenever it no longer requires them. We 
> achieve this by making STSClient closeable, so we can use it in all places 
> where it is suitable.
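
A minimal sketch of such a closeable wrapper; only the SDK's own
AWSSecurityTokenService#shutdown() is assumed, the class shape is illustrative:

{code}
import java.io.Closeable;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;

/** Sketch: lets callers release STS resources early via try-with-resources. */
final class STSClient implements Closeable {
  private final AWSSecurityTokenService sts;

  STSClient(AWSSecurityTokenService sts) {
    this.sts = sts;
  }

  AWSSecurityTokenService get() {
    return sts;
  }

  @Override
  public void close() {
    // optional early release; the SDK does not require a shutdown call
    sts.shutdown();
  }
}
{code}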



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18340.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> deleteOnExit does not work with S3AFileSystem
> -
>
> Key: HADOOP-18340
> URL: https://issues.apache.org/jira/browse/HADOOP-18340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> When deleteOnExit is set on some paths, they are not removed when the file 
> system object is closed. The following exception is logged when the failure 
> is printed at info level.
> {code:java}
> 2022-07-15 19:29:12,552 [main] INFO  fs.FileSystem 
> (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to 
> deleteOnExit for path /file, exception {}
> java.io.IOException: s3a://mock-bucket: FileSystem is closed!
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402)
>         at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805)
>         at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830)
>         at 
> org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>  {code}
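
One way the failure can be avoided, sketched on the assumption that the
subclass may run the inherited protected processDeleteOnExit() before it sets
its own closed flag; the actual patch may be structured differently:

{code}
// sketch: in S3AFileSystem
@Override
public void close() throws IOException {
  // run deleteOnExit processing while the store is still usable; this
  // clears the registered paths, so the later call inside
  // FileSystem.close() becomes a no-op
  processDeleteOnExit();
  // ... then set the closed flag and continue with the normal close path ...
}
{code}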



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: 

[jira] [Created] (HADOOP-18402) S3A committer NPE in spark job abort

2022-08-11 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18402:
---

 Summary: S3A committer NPE in spark job abort
 Key: HADOOP-18402
 URL: https://issues.apache.org/jira/browse/HADOOP-18402
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Steve Loughran


NPE happening in spark {{HadoopMapReduceCommitProtocol.abortJob}} when jobID is 
null


{code}
- save()/findClass() - non-partitioned table - Overwrite *** FAILED ***
  java.lang.NullPointerException:
  at 
org.apache.hadoop.fs.s3a.commit.impl.CommitContext.<init>(CommitContext.java:159)
  at 
org.apache.hadoop.fs.s3a.commit.impl.CommitOperations.createCommitContext(CommitOperations.java:652)
  at 
org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.initiateJobOperation(AbstractS3ACommitter.java:856)
  at 
org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortJob(AbstractS3ACommitter.java:909)
  at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:252)
  at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:268)
  at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:191)
  at 
org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
  at 
org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
  at 
org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
  ...

{code}
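
A defensive sketch of the kind of guard needed, assuming a synthetic fallback
ID is acceptable when no job ID is supplied; the names are illustrative:

{code}
// sketch: inside CommitContext construction
String id = (jobId != null)
    ? jobId.toString()
    : "job-" + java.util.UUID.randomUUID(); // hypothetical fallback ID
{code}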




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2022-08-10 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18399:
---

 Summary: SingleFilePerBlockCache to use LocalDirAllocator for file 
allocation
 Key: HADOOP-18399
 URL: https://issues.apache.org/jira/browse/HADOOP-18399
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


prefetching stream's SingleFilePerBlockCache uses Files.createTempFile() to 
allocate a temp file.

it should be using LocalDirAllocator to allocate space from a list of dirs, 
taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
files are automatically cleaned up on container exit
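
A minimal sketch of the proposed allocation using the existing
LocalDirAllocator API; the file prefix and block size are assumptions:

{code}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;

// sketch: allocate the cache file from the configured buffer directories
Configuration conf = new Configuration();
long blockSize = 8 * 1024 * 1024; // assumed prefetch block size
LocalDirAllocator allocator = new LocalDirAllocator("fs.s3a.buffer.dir");
File cacheFile =
    allocator.createTmpFileForWrite("s3a-prefetch-block-", blockSize, conf);
{code}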



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-08-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18305.
-
Fix Version/s: 3.3.4
   Resolution: Fixed

> Release Hadoop 3.3.4: minor update of hadoop-3.3.3
> --
>
> Key: HADOOP-18305
> URL: https://issues.apache.org/jira/browse/HADOOP-18305
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Create a Hadoop 3.3.4 release with
> * critical fixes
> * ARM artifacts as well as the intel ones



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18393.
-
Fix Version/s: 3.3.4
   Resolution: Duplicate

> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
> Fix For: 3.3.4
>
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities.
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18373) IOStatisticsContext tuning

2022-08-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18373.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> IOStatisticsContext tuning
> --
>
> Key: HADOOP-18373
> URL: https://issues.apache.org/jira/browse/HADOOP-18373
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Tuning of the IOStatisticsContext code
> h2. change property name  to fs.iostatistics
> there are other fs.iostatistics options, the new one needs consistent naming
> h2. enable in hadoop-aws
> edit core-site.xml in hadoop-aws/test/resources to always collect context 
> IOStatistics
> This helps qualify the code
> {code}
> <property>
>   <name>fs.thread.level.iostatistics.enabled</name>
>   <value>true</value>
> </property>
> {code}
> h3. IOStatisticsContext to add a static probe to see if it is enabled.
> this lets apps know not to bother collecting/reporting
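
A sketch of how such a probe would be used; enabled() is the name this issue
proposes, not a confirmed API:

{code}
// sketch: skip collection work when context statistics are disabled
if (IOStatisticsContext.enabled()) { // hypothetical static probe
  IOStatisticsContext ctx =
      IOStatisticsContext.getCurrentIOStatisticsContext();
  LOG.info("IOStatistics: {}", ctx.snapshot());
}
{code}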



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17380) ITestS3AContractSeek.teardown closes FS before superclass does its cleanup

2022-08-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17380.
-
Resolution: Duplicate

> ITestS3AContractSeek.teardown closes FS before superclass does its cleanup
> --
>
> Key: HADOOP-17380
> URL: https://issues.apache.org/jira/browse/HADOOP-17380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> ITestS3AContractSeek.teardown closes the FS, but because it does it before 
> calling super.teardown, the superclass doesn't get the opportunity to delete 
> the test dirs.
> Proposed: change the order. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18323) Bump javax.ws.rs-api To Version 3.1.0

2022-08-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18323.
-
Resolution: Not A Problem

HADOOP-18332 makes this unneeded. closing

> Bump javax.ws.rs-api To Version 3.1.0
> -
>
> Key: HADOOP-18323
> URL: https://issues.apache.org/jira/browse/HADOOP-18323
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.3
>Reporter: groot
>Assignee: groot
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Current version of  javax.ws.rs-api is 2.1.1 - which has a vulnerable 
> dependency to 
> [CVE-2020-15250|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15250]
> Lets Upgrade to 3.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18386) ITestS3SelectLandsat timeout after 10 minutes

2022-08-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18386.
-
Resolution: Duplicate

> ITestS3SelectLandsat timeout after 10 minutes
> -
>
> Key: HADOOP-18386
> URL: https://issues.apache.org/jira/browse/HADOOP-18386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.4.0
> Environment: both trunk and the s3a prefetch feature branch; no vpn 
> active
>Reporter: Steve Loughran
>Priority: Major
>
> timeout doing a full read of the s3 select file through the gzip codec.
> {code}
> [ERROR] 
> testSelectSeekFullLandsat(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)
>   Time elapsed: 600.006 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 60 
> milliseconds
>   at java.lang.Throwable.getStackTraceElement(Native Method)
>   at java.lang.Throwable.getOurStackTrace(Throwable.java:828)
>   at java.lang.Throwable.getStackTrace(Throwable.java:817)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.log4j.spi.LocationInfo.<init>(LocationInfo.java:139)
>   at 
> org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
>   at 
> org.apache.log4j.helpers.PatternParser$LocationPatternConverter.convert(PatternParser.java:500)
>   at 
> org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
>   at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:230)
>   at org.apache.hadoop.util.DurationInfo.close(DurationInfo.java:101)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:123)
>   at 
> org.apache.hadoop.fs.s3a.select.SelectInputStream.read(SelectInputStream.java:246)
>   at 
> org.apache.hadoop.fs.s3a.select.SelectInputStream.seek(SelectInputStream.java:324)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:73)
>   at 
> org.apache.hadoop.fs.s3a.select.AbstractS3SelectTest.seek(AbstractS3SelectTest.java:701)
>   at 
> org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat.testSelectSeekFullLandsat(ITestS3SelectLandsat.java:427)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.j
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18391) harden VectoredReadUtils

2022-08-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18391:
---

 Summary: harden VectoredReadUtils
 Key: HADOOP-18391
 URL: https://issues.apache.org/jira/browse/HADOOP-18391
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Mukund Thakur


harden the VectoredReadUtils methods for consistent and more robust use, 
especially in those filesystems which don't have the API.

VectoredReadUtils.readInDirectBuffer should allocate a maximum buffer size, 
e.g. 4MB, then do repeated reads and copies; this ensures that you don't OOM 
with many threads doing ranged requests. Other libraries do this.

readVectored to call validateNonOverlappingAndReturnSortedRanges before 
iterating.

This ensures the abfs/s3a requirements are always met and, because ranges 
will be read in order, prefetching by other clients will keep their 
performance good.

readVectored to add special handling for 0 byte ranges
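
A sketch of the bounded-buffer pattern described above; the 4MB cap and the
helper's shape are assumptions, not the eventual implementation:

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// sketch: stage through a bounded heap array rather than sizing an
// allocation to the full range length
static void readIntoDirectBuffer(InputStream in, ByteBuffer target)
    throws IOException {
  final int cap = Math.max(1, Math.min(target.remaining(), 4 * 1024 * 1024));
  byte[] staging = new byte[cap];
  while (target.hasRemaining()) {
    int read = in.read(staging, 0,
        Math.min(staging.length, target.remaining()));
    if (read < 0) {
      throw new EOFException("premature end of stream");
    }
    target.put(staging, 0, read); // copy the staged chunk into the buffer
  }
}
{code}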



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15964) Add S3A support for Async Scatter/Gather IO

2022-08-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15964.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> Add S3A support for Async Scatter/Gather IO
> ---
>
> Key: HADOOP-15964
> URL: https://issues.apache.org/jira/browse/HADOOP-15964
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.3.9
>
>
> HADOOP-11867 proposes adding a new scatter/gather IO API.
> For an object store to take advantage of it, it should be doing things like
> * coalescing reads even with a gap between them
> * choosing an optimal ordering of requests
> * submitting reads into the executor pool/using any async API provided by the 
> FS.
> * detecting overlapping reads (and then what?)
> * switching to HTTP 2 where supported
> Do this for S3A
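
A sketch of the first item, coalescing sorted ranges with a gap tolerance;
the Range type and maxGap threshold are illustrative:

{code}
import java.util.ArrayList;
import java.util.List;

final class Range {
  final long offset;
  final long length;

  Range(long offset, long length) {
    this.offset = offset;
    this.length = length;
  }

  long end() {
    return offset + length;
  }
}

// sketch: merge sorted ranges whose gap is small enough that one larger
// GET is cheaper than two separate requests
static List<Range> coalesce(List<Range> sorted, long maxGap) {
  List<Range> out = new ArrayList<>();
  Range current = null;
  for (Range r : sorted) {
    if (current != null && r.offset - current.end() <= maxGap) {
      long end = Math.max(current.end(), r.end());
      current = new Range(current.offset, end - current.offset);
    } else {
      if (current != null) {
        out.add(current);
      }
      current = r;
    }
  }
  if (current != null) {
    out.add(current);
  }
  return out;
}
{code}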



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18379) rebase feature/HADOOP-18028-s3a-prefetch to trunk

2022-08-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18379.
-
Resolution: Done

> rebase feature/HADOOP-18028-s3a-prefetch to trunk
> -
>
> Key: HADOOP-18379
> URL: https://issues.apache.org/jira/browse/HADOOP-18379
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> rebase to trunk, fix conflicts and tests, force push



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18386) ITestS3SelectLandsat timeout after 10 minutes

2022-08-02 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18386:
---

 Summary: ITestS3SelectLandsat timeout after 10 minutes
 Key: HADOOP-18386
 URL: https://issues.apache.org/jira/browse/HADOOP-18386
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, test
Affects Versions: 3.4.0
Reporter: Steve Loughran


timeout doing a full read of the s3 select file through the gzip codec.

{code}
[ERROR] 
testSelectSeekFullLandsat(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat) 
 Time elapsed: 600.006 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 60 
milliseconds
at java.lang.Throwable.getStackTraceElement(Native Method)
at java.lang.Throwable.getOurStackTrace(Throwable.java:828)
at java.lang.Throwable.getStackTrace(Throwable.java:817)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.log4j.spi.LocationInfo.<init>(LocationInfo.java:139)
at 
org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
at 
org.apache.log4j.helpers.PatternParser$LocationPatternConverter.convert(PatternParser.java:500)
at 
org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:230)
at org.apache.hadoop.util.DurationInfo.close(DurationInfo.java:101)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:123)
at 
org.apache.hadoop.fs.s3a.select.SelectInputStream.read(SelectInputStream.java:246)
at 
org.apache.hadoop.fs.s3a.select.SelectInputStream.seek(SelectInputStream.java:324)
at 
org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:73)
at 
org.apache.hadoop.fs.s3a.select.AbstractS3SelectTest.seek(AbstractS3SelectTest.java:701)
at 
org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat.testSelectSeekFullLandsat(ITestS3SelectLandsat.java:427)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.j

{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18385) ITestS3ACannedACLs failure; not in a span

2022-08-02 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18385:
---

 Summary: ITestS3ACannedACLs failure; not in a span
 Key: HADOOP-18385
 URL: https://issues.apache.org/jira/browse/HADOOP-18385
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Reporter: Steve Loughran


seen in a test of the prefetch feature branch, but it looks more like this has 
been lurking for a long time, or just that some code change has moved the API 
call out of a span.


{code}
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3ACannedACLs
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.592 s 
<<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ACannedACLs
[ERROR] testCreatedObjectsHaveACLs(org.apache.hadoop.fs.s3a.ITestS3ACannedACLs) 
 Time elapsed: 0.591 s  <<< ERROR!
org.apache.hadoop.fs.s3a.audit.AuditFailureException: 
dbb71c86-e022-4b76-99cf-c1f64dd21389-00013058 unaudited operation executing a 
request outside an audit span 
{com.amazonaws.services.s3.model.GetObjectAclRequest size=0, mutating=true}
at 
org.apache.hadoop.fs.s3a.ITestS3ACannedACLs.assertObjectHasLoggingGrant(ITestS3ACannedACLs.java:94)
at 
org.apache.hadoop.fs.s3a.ITestS3ACannedACLs.testCreatedObjectsHaveACLs(ITestS3ACannedACLs.java:69)
{code}


fix is trivial: do the operation within a span



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18384) ITestS3AFileSystemStatistic failure in prefetch feature branch

2022-08-02 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18384:
---

 Summary: ITestS3AFileSystemStatistic failure in prefetch feature 
branch
 Key: HADOOP-18384
 URL: https://issues.apache.org/jira/browse/HADOOP-18384
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Reporter: Steve Loughran


testing the rebased prefetch feature branch; got a failure in 
ITestS3AFileSystemStatistic
 
{code}
org.apache.hadoop.fs.s3a.statistics.ITestS3AFileSystemStatistic
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.489 s 
<<< FAILURE! - in 
org.apache.hadoop.fs.s3a.statistics.ITestS3AFileSystemStatistic
[ERROR] 
testBytesReadWithStream(org.apache.hadoop.fs.s3a.statistics.ITestS3AFileSystemStatistic)
  Time elapsed: 1.489 s  <<< FAILURE!
java.lang.AssertionError: Mismatch in number of FS bytes read by InputStreams 
expected:<2048> but was:<69537130>
at 
org.apache.hadoop.fs.s3a.statistics.ITestS3AFileSystemStatistic.testBytesReadWithStream(ITestS3AFileSystemStatistic.java:72)


{code}
that's 64MB + ~237KB, the kind of value you would get from prefetching.

But prefetching was disabled in this test run.

Maybe it's just that the FS stats aren't being reset between test cases.
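
One way to rule that out, assuming the test base class exposes a
createConfiguration() hook; disabling the FS cache gives each test case its
own FileSystem instance and therefore fresh statistics:

{code}
// sketch: in the test class
@Override
protected Configuration createConfiguration() {
  Configuration conf = super.createConfiguration();
  // avoid cross-test aggregation through the cached FS instance
  conf.setBoolean("fs.s3a.impl.disable.cache", true);
  return conf;
}
{code}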



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18380) fs.s3a.prefetch.block.size to be read through longBytesOption

2022-07-28 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18380:
---

 Summary: fs.s3a.prefetch.block.size to be read through 
longBytesOption
 Key: HADOOP-18380
 URL: https://issues.apache.org/jira/browse/HADOOP-18380
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


use {{longBytesOption(fs.s3a.prefetch.block.size)}}

this allows units like M to be used, and is consistent with the rest of the 
s3a size params. It also sets a minimum size to prevent negative values.
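
A sketch of the proposed read via the existing S3AUtils.longBytesOption()
helper; the default and minimum shown are assumptions:

{code}
// sketch: parse the block size with unit suffixes and a lower bound
long blockSize = S3AUtils.longBytesOption(conf,
    "fs.s3a.prefetch.block.size",
    8 * 1024 * 1024,  // assumed default: 8M
    1);               // assumed minimum: rejects zero/negative sizes
{code}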



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18379) rebase feature/HADOOP-18028-s3a-prefetch to trunk

2022-07-28 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18379:
---

 Summary: rebase feature/HADOOP-18028-s3a-prefetch to trunk
 Key: HADOOP-18379
 URL: https://issues.apache.org/jira/browse/HADOOP-18379
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


rebase to trunk, fix conflicts and tests, force push



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18377) hadoop-aws maven build to add a prefetch profile to run all tests with prefetching

2022-07-28 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18377:
---

 Summary: hadoop-aws maven build to add a prefetch profile to run 
all tests with prefetching
 Key: HADOOP-18377
 URL: https://issues.apache.org/jira/browse/HADOOP-18377
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Reporter: Steve Loughran


Add a prefetch profile to the hadoop-aws build so tests run with prefetching 
on, similar to how the markers option does.

This makes it easy to test everything with prefetching on/off without editing 
XML files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18182) S3File to store reference to active S3Object in a field.

2022-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18182.
-
Resolution: Not A Problem

> S3File to store reference to active S3Object in a field.
> 
>
> Key: HADOOP-18182
> URL: https://issues.apache.org/jira/browse/HADOOP-18182
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Bhalchandra Pandit
>Priority: Major
>
> HADOOP-17338 showed us how a recent {{S3Object.finalize()}} implementation 
> can call stream.close() and so close an active stream if a GC happens during 
> a read. Replicate the same fix here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18344) AWS SDK update to 1.12.262 to address jackson CVE-2018-7489

2022-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18344.
-
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

> AWS SDK update to 1.12.262 to address jackson  CVE-2018-7489
> 
>
> Key: HADOOP-18344
> URL: https://issues.apache.org/jira/browse/HADOOP-18344
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0, 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
>  yet another jackson CVE in aws sdk
> https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271
> maybe we need to have a list of all the shaded jacksons we get on the 
> classpath and have a process of upgrading them all at the same time



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18372) ILoadTestS3ABulkDeleteThrottling failing

2022-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18372.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> ILoadTestS3ABulkDeleteThrottling failing
> 
>
> Key: HADOOP-18372
> URL: https://issues.apache.org/jira/browse/HADOOP-18372
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In the test ILoadTestS3ABulkDeleteThrottling, it looks like the FS config is 
> being set up too late in the test suite; it should be moved from setup to 
> createConf
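
A sketch of that move, assuming the usual S3A test base class hook; the
option and value shown are illustrative:

{code}
// sketch: apply throttling-related options before the FS is instantiated
@Override
protected Configuration createConfiguration() {
  Configuration conf = super.createConfiguration();
  conf.setInt("fs.s3a.bulk.delete.page.size", 50); // illustrative value
  return conf;
}
{code}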



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18374) DistCP: Aggregate IOStatistics Counters in MapReduce Counters

2022-07-27 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18374:
---

 Summary: DistCP: Aggregate IOStatistics Counters in MapReduce 
Counters
 Key: HADOOP-18374
 URL: https://issues.apache.org/jira/browse/HADOOP-18374
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: tools/distcp
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Mehakmeet Singh


Distcp can collect IOStatisticsContext counter values and report them to the 
console. it can't do the timings in min/mean/max though, as there's no way to 
aggregate them properly.

> # Publish statistics to MapReduce counters in the tasks within 
> CopyMapper.copyFileWithRetry() (see the sketch after this list). 
> # The counters will be automatically logged in Job.monitorAndPrintJob() when 
> DistCp is executed with the -verbose option; no need for changes there.
> # We could also publish the IOStatistics means by publishing the sample count 
> and total sum as two separate counters.
> # In AbstractContractDistCpTest, add an override point for subclasses to list 
> which metrics they will issue; assert that values are generated.
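
A minimal sketch of the first step, assuming the thread's IOStatisticsContext
is populated by the copy and a flat counter group name is acceptable:

{code}
// sketch: after a file copy inside CopyMapper
IOStatisticsSnapshot snapshot =
    IOStatisticsContext.getCurrentIOStatisticsContext().snapshot();
snapshot.counters().forEach((name, value) ->
    context.getCounter("IOStatistics", name).increment(value));
{code}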




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17806) Add ListWithIOStats wrapper to return IOStats from a list.

2022-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17806.
-
Resolution: Won't Fix

Lists which update the context IOStatistics give you this without the need for 
new APIs

> Add ListWithIOStats wrapper to return IOStats from a list.
> -
>
> Key: HADOOP-17806
> URL: https://issues.apache.org/jira/browse/HADOOP-17806
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Major
>
> Just as RemoteIterators now adds the ability to add an IOStats result, it'd 
> be handy to do the same for a java.util.List. This makes it possible to 
> return IO stats from static methods which return them



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17553) FileSystem.close() to optionally log IOStats; save to local dir

2022-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17553.
-
Resolution: Won't Fix

> FileSystem.close() to optionally log IOStats; save to local dir
> ---
>
> Key: HADOOP-17553
> URL: https://issues.apache.org/jira/browse/HADOOP-17553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>
> We could save the IOStats to a local temp dir as JSON (the snapshot is 
> designed to be serializable, even has a test), with a unique name 
> (iostats-stevel-s3a-bucket1-timestamp-random#.json ... etc). 
> We can collect these (Rajesh can, anyway), and then
> * look for load on a specific bucket
> * look what happened at a specific time
> The best bit: the IOStatisticsSnapshot aggregates counters, min/max/mean, so 
> you could merge iostats-*-s3a-bucket1-*.json to get the IOStats of all 
> principals working with a given bucket
> This will be local, so low cost, low cost enough we could turn it on in 
> production. All that's needed is collection of the stats from the local hosts 
> (or they write to a shared mounted volume)
> We will need some "hadoop iostats merge" command to take multiple files and 
> merge them all together; print to screen or save to a new file. 
> Straightforward as all the load and merge code is present.
> Needs
> * logging in FS.close
> * new iostats CLI + docs, tests
> * extend IOStatisticsSnapshot with a list of (key, value) options for use 
> in annotating saved logs (hostname, principal, jobID, ...). Don't know how to 
> merge these.
> If we are going to add a new context map to the IOStatisticsSnapshot then we 
> MUST update it before 3.3.1 ships so as to avoid breaking the serialization 
> format on the next release, especially the java one. 
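
A sketch of the save step, using the snapshot class's existing JSON
serialization; the fs and tmpDir variables and the file naming are
illustrative:

{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

// sketch: snapshot the FS statistics and save them as JSON locally
IOStatisticsSnapshot snapshot =
    IOStatisticsSupport.snapshotIOStatistics(fs.getIOStatistics());
String json = IOStatisticsSnapshot.serializer().toJson(snapshot);
Files.write(
    Paths.get(tmpDir, "iostats-" + System.currentTimeMillis() + ".json"),
    json.getBytes(StandardCharsets.UTF_8));
{code}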



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18373) hadoop-aws tests to enable iOStatisticsContext

2022-07-27 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18373:
---

 Summary: hadoop-aws tests to enable iOStatisticsContext
 Key: HADOOP-18373
 URL: https://issues.apache.org/jira/browse/HADOOP-18373
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Mehakmeet Singh


edit core-site.xml in hadoop-aws/test/resources to always collect context 
IOStatistics

This helps qualify the code
{code}
<property>
  <name>fs.thread.level.iostatistics.enabled</name>
  <value>true</value>
</property>
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18372) ILoadTestS3ABulkDeleteThrottling failing

2022-07-27 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18372:
---

 Summary: ILoadTestS3ABulkDeleteThrottling failing
 Key: HADOOP-18372
 URL: https://issues.apache.org/jira/browse/HADOOP-18372
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.4.0
Reporter: Steve Loughran


In the test ILoadTestS3ABulkDeleteThrottling, it looks like the FS config is 
being set up too late in the test suite; it should be moved from setup to 
createConf



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18371) s3a FS init logs at warn if fs.s3a.create.storage.class is unset

2022-07-27 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18371:
---

 Summary: s3a FS init logs at warn if fs.s3a.create.storage.class 
is unset
 Key: HADOOP-18371
 URL: https://issues.apache.org/jira/browse/HADOOP-18371
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.9
Reporter: Steve Loughran
Assignee: Monthon Klongklaew


if you don't have an s3a storage class set in {{fs.s3a.create.storage.class}}, 
then whenever you create an S3A FS instance, it logs at warn

{code}

bin/hadoop s3guard bucket-info $BUCKET

2022-07-27 11:53:11,239 [main] INFO  Configuration.deprecation 
(Configuration.java:logDeprecation(1459)) - fs.s3a.server-side-encryption.key 
is deprecated. Instead, use fs.s3a.encryption.key
2022-07-27 11:53:11,240 [main] INFO  Configuration.deprecation 
(Configuration.java:logDeprecation(1459)) - 
fs.s3a.server-side-encryption-algorithm is deprecated. Instead, use 
fs.s3a.encryption.algorithm
2022-07-27 11:53:11,396 [main] WARN  s3a.S3AFileSystem 
(S3AFileSystem.java:createRequestFactory(1004)) - Unknown storage class 
property fs.s3a.create.storage.class: ; falling back to default storage class
2022-07-27 11:53:11,839 [main] INFO  impl.DirectoryPolicyImpl 
(DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be 
kept
Filesystem s3a://stevel-london
Location: eu-west-2


{code}

note: this is why part of qualifying an SDK update involves looking at the 
logs and running the CLI commands by hand... you see whether new messages 
have crept in
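
A sketch of the quieter behaviour, assuming an empty string means "unset" and
that a validator for known values exists (isKnownStorageClass() is
hypothetical):

{code}
// sketch: only warn for explicitly set but unrecognized values
String storageClass = conf.getTrimmed("fs.s3a.create.storage.class", "");
if (storageClass.isEmpty()) {
  // unset: silently use the default storage class, no warning
} else if (!isKnownStorageClass(storageClass)) { // hypothetical validator
  LOG.warn("Unknown storage class property fs.s3a.create.storage.class: {};"
      + " falling back to default storage class", storageClass);
}
{code}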



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18350) Support for hadoop-aws with aws-java-sdk-bundle with version greater than 1.12.220

2022-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18350.
-
Resolution: Duplicate

> Support for hadoop-aws with aws-java-sdk-bundle with version greater than 
> 1.12.220
> --
>
> Key: HADOOP-18350
> URL: https://issues.apache.org/jira/browse/HADOOP-18350
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: fs/s3
>Reporter: Bilna
>Priority: Major
>
> There are CVEs, like CVE-2021-37137 and many others, listed against 
> aws-java-sdk-bundle version 1.11.375, and the fix is available in versions 
> higher than 1.12.220. It would be great to have hadoop-aws with the latest 
> aws-java-sdk-bundle.jar. Will you be able to provide this? If so, may I 
> know approximately when I can expect it?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17461) Add thread-level IOStatistics Context

2022-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17461.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> Add thread-level IOStatistics Context
> -
>
> Key: HADOOP-17461
> URL: https://issues.apache.org/jira/browse/HADOOP-17461
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> For effective reporting of the iostatistics of individual worker threads, we 
> need a thread-level context which IO components update.
> * this context needs to be passed into background threads performing work on 
> behalf of a task.
> * IO components (streams, iterators, filesystems) need to update this 
> context's statistics as they perform work, without double counting anything.
> I imagine a ThreadLocal IOStatisticsContext which will be updated in the 
> FileSystem API calls. This context MUST be passed into the background threads 
> used by a task, so that IO is correctly aggregated.
> I don't want streams or listIterators to do the updating, as there is more 
> risk of double counting. However, we need to see their statistics if we want 
> to know things like "bytes discarded in backwards seeks". And I don't want to 
> be updating a shared context object on every read() call.
> If all we want is store IO (HEAD, GET, DELETE, list performance etc) then the 
> FS is sufficient. 
> If we do want the stream-specific detail, then I propose
> * caching the context in the constructor
> * updating it only in close() or unbuffer() (as we do from S3AInputStream to 
> S3AInstrumenation)
> * excluding those we know the FS already collects.
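
A sketch of the propagation requirement, using the context's static
accessors; the executor wiring is illustrative:

{code}
// sketch: capture the task thread's context, reinstate it in the worker
final IOStatisticsContext taskContext =
    IOStatisticsContext.getCurrentIOStatisticsContext();
executor.submit(() -> {
  IOStatisticsContext.setThreadIOStatisticsContext(taskContext);
  // filesystem work done in this thread now aggregates into taskContext
});
{code}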



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18367) S3A prefetching to update IOStatisticsContext

2022-07-26 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18367:
---

 Summary: S3A prefetching to update IOStatisticsContext
 Key: HADOOP-18367
 URL: https://issues.apache.org/jira/browse/HADOOP-18367
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Steve Loughran


Once HADOOP-17461 is in, the S3A prefetching stream should update the 
IOStatisticsContext of the thread in which it was constructed (doing so in 
close() is sufficient).
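
A sketch of that update, assuming the stream captures the creating thread's
context at construction time and that the context exposes an aggregator; the
wiring is illustrative:

{code}
// sketch: inside the prefetching input stream
private final IOStatisticsContext threadContext =
    IOStatisticsContext.getCurrentIOStatisticsContext();

@Override
public synchronized void close() throws IOException {
  super.close();
  // merge this stream's statistics into the creating thread's context
  threadContext.getAggregator().aggregate(getIOStatistics());
}
{code}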





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18190) Collect IOStatistics during S3A prefetching

2022-07-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18190.
-
Resolution: Fixed

> Collect IOStatistics during S3A prefetching 
> 
>
> Key: HADOOP-18190
> URL: https://issues.apache.org/jira/browse/HADOOP-18190
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> There is a lot more happening in reads, so there's a lot more data to collect 
> and publish in IO stats for us to view in a summary at the end of processes 
> as well as get from the stream while it is active.
> Some useful ones would seem to be:
> counters
>  * whether a block is in memory; using 0 or 1 here lets aggregation reports 
> count the total number of memory-cached files.
>  * prefetching operations executed
>  * errors during prefetching
> gauges
>  * number of blocks in cache
>  * total size of blocks
>  * active prefetches
> + active memory used
> duration tracking count/min/max/ave
>  * time to fetch a block
>  * time queued before the actual fetch begins
>  * time a reader is blocked waiting for a block fetch to complete
> and some info on cache use itself
>  * number of blocks discarded unread
>  * number of prefetched blocks later used
>  * number of backward seeks to a prefetched block
>  * number of forward seeks to a prefetched block
> the key ones I care about are
>  # memory consumption
>  # can we determine if cache is working (reads with cache hit) and when it is 
> not (misses, wasted prefetches)
>  # time blocked on executors
> The stats need to be accessible on a stream even when closed, and aggregated 
> into the FS. once we get per-thread stats contexts we can publish there too 
> and collect in worker threads for reporting in task commits



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18344) AWS SDK update to address

2022-07-18 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18344:
---

 Summary: AWS SDK update to address 
 Key: HADOOP-18344
 URL: https://issues.apache.org/jira/browse/HADOOP-18344
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0, 3.3.4
Reporter: Steve Loughran


 yet another jackson CVE in aws sdk
https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271

maybe we need to have a list of all the shaded jacksons we get on the classpath 
and have a process of upgrading them all at the same time



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18332) remove rs-api dependency by downgrading jackson to 2.12.7

2022-07-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18332.
-
Fix Version/s: 3.3.4
 Release Note: Downgrades Jackson from 2.13.2 to 2.12.7 to fix class 
conflicts in downstream projects. This version of jackson does contain the fix 
for CVE-2020-36518.  (was: Downgrades Jackson from 2.13.2 to 2.12.7 to fix 
Class conflicts in downstream projects)
   Resolution: Fixed

> remove rs-api dependency by downgrading jackson to 2.12.7
> -
>
> Key: HADOOP-18332
> URL: https://issues.apache.org/jira/browse/HADOOP-18332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This jsr311-api jar seems to conflict with the newly added rs-api jar 
> dependency; they have many of the same classes (but conflicting copies), and 
> jersey-core 1.19 needs jsr311-api to work properly (it fails if rs-api is 
> used instead)
> * https://mvnrepository.com/artifact/javax.ws.rs/jsr311-api
> * https://mvnrepository.com/artifact/javax.ws.rs/javax.ws.rs-api
> It seems we will need to downgrade jackson to 2.12.7 because of jax-rs 
> compatibility issues in jackson 2.13 (see 
> https://github.com/FasterXML/jackson-jaxrs-providers/issues/134)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18254) Add in configuration option to enable prefetching

2022-07-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18254.
-
Resolution: Fixed

> Add in configuration option to enable prefetching
> -
>
> Key: HADOOP-18254
> URL: https://issues.apache.org/jira/browse/HADOOP-18254
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently prefetching is enabled by default; we should instead add a config 
> option to enable it.
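A sketch of what the toggle could look like from client code; the key name 
"fs.s3a.prefetch.enabled" is an assumption for illustration here, not a 
confirmed constant:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// hypothetical key name, for illustration only
conf.setBoolean("fs.s3a.prefetch.enabled", false);  // opt out of prefetching
FileSystem fs = FileSystem.newInstance(new URI("s3a://bucket/"), conf);
{code}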



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-07-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18231.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects the stream to be opened 4 times (once for 
> every random read), but prefetching will only open it twice. 
> testDecompressionSequential128K - expects the stream to be opened once, but 
> prefetching will open it once for each block the file has. The landsat file 
> used in the test is 42MB; with a prefetching block size of 8MB, the expected 
> open count is 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy; prefetching 
> handles random IO by caching blocks, so it does none of that. 
> testRandomReadOverBuffer - multiple assertions fail here; it also depends a 
> lot on readAhead values, so it is not very relevant for prefetching.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18339) S3A storage class option only picked up when buffering writes to disk

2022-07-15 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18339:
---

 Summary: S3A storage class option only picked up when buffering 
writes to disk
 Key: HADOOP-18339
 URL: https://issues.apache.org/jira/browse/HADOOP-18339
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.3.9
Reporter: Steve Loughran


when you switch s3a output stream buffering to heap or byte buffer, the storage 
class option isn't added to the put request


{code}

  
fs.s3a.fast.upload.buffer
bytebuffer
  

{code}

and the ITestS3AStorageClass tests fail.
{code}

java.lang.AssertionError: [Storage class of object 
s3a://stevel-london/test/testCreateAndCopyObjectWithStorageClassGlacier/file1] 
Expecting:
 
to be equal to:
 <"glacier">
ignoring case considerations

at 
org.apache.hadoop.fs.s3a.ITestS3AStorageClass.assertObjectHasStorageClass(ITestS3AStorageClass.java:215)
at 
org.apache.hadoop.fs.s3a.ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassGlacier(ITestS3AStorageClass.java:129)


{code}

we noticed this in a code review; the request factory only sets the option when 
the source is a file, not memory.

proposed: parameterize the test suite on disk/byte buffer, then fix
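For anyone reproducing this, a sketch of the failing combination; the storage 
class key is assumed to be "fs.s3a.create.storage.class" (check Constants in 
hadoop-aws for the exact name), and the bucket/path are placeholders:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.s3a.fast.upload.buffer", "bytebuffer");   // buffer in memory, not disk
conf.set("fs.s3a.create.storage.class", "glacier");    // assumed key name
FileSystem fs = FileSystem.newInstance(new URI("s3a://bucket/"), conf);
// the PUT issued here should carry the storage class, but currently only
// the disk-buffered path sets it on the request
try (FSDataOutputStream out = fs.create(new Path("s3a://bucket/test/file1"))) {
  out.write(new byte[]{0, 1, 2, 3});
}
{code}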




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18074.
-
Fix Version/s: 3.4.0
   3.3.5
   Resolution: Fixed

> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Hello,
> The  
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> It looks like there is an issue if a *NamingException* is caught in the 
> middle of the loop:
> The groups variable is not reset in the catch clause, and therefore the 
> fallback lookup cannot be executed (when goUpHierarchy==0 at least):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
> groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is 
> not correct.
> The following options could be used as solutions:
>  * Reset the groups variable to an empty list in the catch clause, to 
> trigger the fallback query (sketched below).
>  * Add an option flag to enable ignoring groups with a NamingException 
> (since they are most probably not groups).
> Independently, should an issue also occur in the fallback query as well as 
> in the first lookup (so that the full list cannot be returned), the method 
> should/could (with an option flag) throw an exception, because in some 
> scenarios accuracy is important.
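A rough sketch of the first option, to make the control flow concrete. This is 
illustrative rather than the actual LdapGroupsMapping code: searchUser(), 
extractGroupNames() and the context variable c are hypothetical stand-ins for 
the real search logic.

{code:java}
import java.util.HashSet;
import java.util.Set;
import javax.naming.NamingException;
import javax.naming.directory.SearchResult;

Set<String> doGetGroups(String user, int goUpHierarchy) throws NamingException {
  Set<String> groups = new HashSet<>();
  SearchResult result = null;
  try {
    result = searchUser(user);            // hypothetical helper
    groups = extractGroupNames(result);   // hypothetical helper
  } catch (NamingException e) {
    // a partial list is worse than an empty one: clear it so the
    // fallback lookup below runs instead of returning incomplete data
    groups.clear();
  }
  if (groups.isEmpty() || goUpHierarchy > 0) {
    groups = lookupGroup(result, c, goUpHierarchy);  // existing fallback
  }
  return groups;
}
{code}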



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18336) tag FSDataInputStream.getWrappedStream() @Public/@Stable

2022-07-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18336.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

merged to 3.3 and above. no need to backport further.

> tag FSDataInputStream.getWrappedStream() @Public/@Stable
> 
>
> Key: HADOOP-18336
> URL: https://issues.apache.org/jira/browse/HADOOP-18336
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> PARQUET-2134 shows external code is calling 
> FSDataInputStream.getWrappedStream().
> Tag it as @Public/@Stable: it has been stable, and we should acknowledge 
> that use and know not to break it.
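The change itself is small; the tagging amounts to something like this sketch, 
using the annotations in org.apache.hadoop.classification:

{code:java}
import java.io.InputStream;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public
@InterfaceStability.Stable
public InputStream getWrappedStream() {
  return in;
}
{code}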



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18338) Unable to access data from S3 bucket over a vpc endpoint - 400 bad request

2022-07-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18338.
-
Resolution: Not A Problem

Change the endpoint and s3a doesn't know what region to sign requests with.

See HADOOP-17705 and set fs.s3a.endpoint.region.
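For anyone hitting this, a sketch of the workaround using the 
fs.s3a.endpoint.region option from HADOOP-17705; the VPCE hostname below is a 
placeholder:

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// placeholder VPC interface endpoint
conf.set("fs.s3a.endpoint",
    "https://bucket.vpce-example.s3.us-east-1.vpce.amazonaws.com");
// tell the client which region to sign requests with (HADOOP-17705)
conf.set("fs.s3a.endpoint.region", "us-east-1");
{code}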

> Unable to access data from S3 bucket over a vpc endpoint - 400 bad request
> --
>
> Key: HADOOP-18338
> URL: https://issues.apache.org/jira/browse/HADOOP-18338
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs/s3
>Reporter: Aarti
>Priority: Major
> Attachments: spark_s3.txt, spark_s3_vpce_error.txt
>
>
> We are trying to write to S3 bucket which has policy with specific IAM Users, 
> SSE and endpoint.  So this bucket has 2 endpoints mentioned in policy : 
> gateway endpoint and interface endpoint.
>  
> When we use gateway endpoint which is general one: 
> [https://s3.us-east-1.amazonaws.com|https://s3.us-east-1.amazonaws.com/] => 
> spark code executes successfully and writes to S3 bucket
> But when we use interface endpoint (which we have to use ideally): 
> [https://bucket.vpce-<>.s3.us-east-1.vpce.amazonaws.com|https://bucket.vpce-%3C%3E.s3.us-east-1.vpce.amazonaws.com/]
>  => spark code throws an error as :
>  
> py4j.protocol.Py4JJavaError: An error occurred while calling o91.save.
> : org.apache.hadoop.fs.s3a.AWSBadRequestException: doesBucketExist on <BUCKET NAME>: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request 
> (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request 
> ID: BA67GFNR0Q127VFM; S3 Extended Request ID: 
> BopO6Cn1hNzXdWh89hZlnl/QyTJef/1cxmptuP6f4yH7tqfMO36s/7mF+q8v6L5+FmYHXbFdEss=; 
> Proxy: null), S3 Extended Request ID: 
> BopO6Cn1hNzXdWh89hZlnl/QyTJef/1cxmptuP6f4yH7tqfMO36s/7mF+q8v6L5+FmYHXbFdEss=:400
>  Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 
> 400 Bad Request; Request ID: BA67GFNR0Q127VFM; S3 Extended Request ID: 
> BopO6Cn1hNzXdWh89hZlnl/QyTJef/1cxmptuP6f4yH7tqfMO36s/7mF+q8v6L5+FmYHXbFdEss=; 
> Proxy: null)
>  
> Attaching the pyspark code and exception trace
>   [^spark_s3.txt]
> ^[^spark_s3_vpce_error.txt]^



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18217.
-
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

committed to branch-3.3 and above

> shutdownhookmanager should not be multithreaded (deadlock possible)
> ---
>
> Key: HADOOP-18217
> URL: https://issues.apache.org/jira/browse/HADOOP-18217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.10.1
> Environment: linux, windows, any version
>Reporter: Catherinot Remi
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> Attachments: wtf.java
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The ShutdownHookManager class uses an executor to run hooks to have a 
> "timeout" notion around them. It does this using a single-threaded executor. 
> It can lead to a deadlock, leaving a never-shutting-down JVM, with this 
> execution flow:
>  * JVM needs to exit (only daemon threads remaining or someone called 
> System.exit)
>  * ShutdownHookManager kicks in
>  * SHMngr executor starts running some hooks
>  * SHMngr executor thread kicks in and, as a side effect, runs some code from 
> one of the hooks that calls System.exit (as a side effect from an external 
> lib for example)
>  * the executor thread is waiting for a lock because another thread already 
> entered System.exit and has its internal lock, so the executor never returns
>  * SHMngr never returns
>  * 1st call to System.exit never returns
>  * JVM stuck
>  
> Using an executor with a single thread gives "fake" timeouts (the task keeps 
> running; you can interrupt it, but until it stumbles upon some piece of code 
> that is interruptible, like an IO call, it will keep running), especially 
> since the executor is a single-threaded one. So it has this bug, for example:
>  * caller submits 1st hook (a bad one that would need 1 hour of runtime and 
> that cannot be interrupted)
>  * executor starts 1st hook
>  * caller of the future 1st hook result times out
>  * caller submits 2nd hook
>  * bug: the 1st hook is still running, so the 2nd hook triggers a timeout 
> without ever getting the chance to run; the 1st faulty hook makes it 
> impossible for any other hook to run, so running hooks in a single separate 
> thread does not allow other hooks to run in parallel with long ones.
>  
> If we really, really want to timeout the JVM shutdown, even accepting a 
> possibly dirty shutdown, it should rather handle the hooks inside the 
> initial thread (not spawning new one(s), so not triggering the deadlock 
> described in the 1st place) and, if a timeout was configured, only spawn a 
> single parallel daemon thread that sleeps the timeout delay and then uses 
> Runtime.halt (which bypasses the hook system, so should not trigger the 
> deadlock). If the normal System.exit ends before the timeout delay, 
> everything is fine. If System.exit took too much time, the JVM is killed, 
> and so the reason why this multithreaded shutdown hook implementation was 
> created is satisfied (avoiding hanging JVMs).
>  
> Had the bug with both Oracle and OpenJDK builds, all in the 1.8 major 
> version. Hadoop 2.6 and 2.7 did not have the issue because they do not run 
> hooks in another thread.
>  
> Another solution is of course to configure the timeout AND to have as many 
> threads as needed to run the hooks, so as to have at least some gain to 
> offset the pain of the deadlock scenario.
>  
> EDIT: added some logs and reproduced the problem. In fact it is located 
> after triggering all the hook entries and before shutting down the executor. 
> The current code, after running the hooks, creates a new Configuration 
> object, reads the configured timeout from it, and applies this timeout to 
> shut down the executor. I sometimes run with a classloader doing remote 
> classloading; Configuration loads its content using this classloader, so 
> when the JVM is shutting down and some network error occurs, the classloader 
> fails to load the resources needed by Configuration. So the code crashes 
> before shutting down the executor and ends up inside the thread's default 
> uncaught throwable handler, which was calling System.exit, so it got stuck; 
> shutting down the executor never returned, and neither did the JVM.
> So, forget about the halt stuff (even if it is a last-resort, very robust 
> safety net). Still, I'll do a small adjustment to the final executor 
> shutdown code to be slightly more robust to even the strangest 
> exceptions/errors it encounters.
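A minimal sketch of the watchdog idea from the description, assuming the hooks 
run inline in the shutdown thread; runAllShutdownHooksInThisThread() and the 
timeout value are hypothetical stand-ins for the real hook-running code:

{code:java}
long timeoutMs = 30_000L;  // placeholder timeout
Thread watchdog = new Thread(() -> {
  try {
    Thread.sleep(timeoutMs);
  } catch (InterruptedException e) {
    return;  // hooks finished in time; watchdog cancelled
  }
  // halt() bypasses the shutdown-hook machinery entirely, so it cannot
  // re-enter System.exit() and deadlock the way the executor path can
  Runtime.getRuntime().halt(1);
});
watchdog.setDaemon(true);
watchdog.start();
runAllShutdownHooksInThisThread();  // hypothetical: run hooks inline, no executor
watchdog.interrupt();               // finished in time; cancel the halt
{code}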



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: 

[jira] [Resolved] (HADOOP-17273) FSDataInput/Output Streams to automatically collect IOStatistics

2022-07-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17273.
-
Resolution: Won't Fix

just use thread level stats from HADOOP-17461
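For reference, a rough sketch of the thread-level alternative from 
HADOOP-17461 (IOStatisticsContext, hadoop 3.3.5+); the method names here are 
from memory and should be verified against that interface:

{code:java}
import org.apache.hadoop.fs.statistics.IOStatisticsContext;

import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToPrettyString;

IOStatisticsContext ctx = IOStatisticsContext.getCurrentIOStatisticsContext();
ctx.reset();  // start a clean measurement window for this worker thread
// ... stream/filesystem IO performed in this thread ...
System.out.println(ioStatisticsToPrettyString(ctx.getIOStatistics()));
{code}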

> FSDataInput/Output Streams to automatically collect IOStatistics
> 
>
> Key: HADOOP-17273
> URL: https://issues.apache.org/jira/browse/HADOOP-17273
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Major
>
> The FS input/output streams should automatically collect stream IO 
> statistics even if the inner classes don't: count invocations and durations.
> An issue here becomes "how would you aggregate with the inner stream, 
> *efficiently*?". Not sure there, except maybe:
> * have some interface to access an updatable IOStatisticsStore which, if the 
> stream implements it, says "here is something you can update"
> * the wrapper class updates that with counts and durations
> * we extend DynamicIOStatistics for each counter/gauge etc. to have a 
> factory which can add new entries on demand (new AtomicLong, etc.)
> * so wrapper classes just update their stats, which updates existing stats 
> or triggers the on-demand creation of a new entry
> Not so informative in terms of low-level details (HTTP/IPC requests and 
> latency, errors, bytes discarded) but it would give callers the benefits of 
> using the API for HDFS, Ozone, GCS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17856) JsonSerDeser to collect IOStatistics

2022-07-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17856.
-
Resolution: Won't Fix

no. use the thread stats instead

> JsonSerDeser to collect IOStatistics
> 
>
> Key: HADOOP-17856
> URL: https://issues.apache.org/jira/browse/HADOOP-17856
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Minor
>
> JSON deserializer to build stats on reading costs, which can then be 
> collected to measure the cost of ser/deser and of file IO from the 
> input/output streams, if they provide it.
> This would allow committers to report costs better here. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18336) tag FSDataInputStream.getWrappedStream() @Public/@Stable

2022-07-12 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18336:
---

 Summary: tag FSDataInputStream.getWrappedStream() @Public/@Stable
 Key: HADOOP-18336
 URL: https://issues.apache.org/jira/browse/HADOOP-18336
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.3.3
Reporter: Steve Loughran


PARQUET-2134 shows external code is calling FSDataInputStream.getWrappedStream().

Tag it as @Public/@Stable: it has been stable, and we should acknowledge that 
use and know not to break it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18307) remove hadoop-cos as a dependency of hadoop-cloud-storage

2022-06-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18307.
-
Fix Version/s: 3.3.4
   Resolution: Fixed

> remove hadoop-cos as a dependency of hadoop-cloud-storage
> -
>
> Key: HADOOP-18307
> URL: https://issues.apache.org/jira/browse/HADOOP-18307
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> To deal with HADOOP-18159 without qualifying an updated cos library, remove 
> it as an explicit dependency of hadoop-cloud-storage.
> It will still be built and published.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18237) Upgrade Apache Xerces Java to 2.12.2

2022-06-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18237.
-
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

> Upgrade Apache Xerces Java to 2.12.2
> 
>
> Key: HADOOP-18237
> URL: https://issues.apache.org/jira/browse/HADOOP-18237
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Description
> https://github.com/advisories/GHSA-h65f-jvqw-m9fj
> There's a vulnerability in the Apache Xerces Java (XercesJ) XML parser when 
> handling specially crafted XML document payloads. This causes the XercesJ 
> XML parser to wait in an infinite loop, which may sometimes consume system 
> resources for a prolonged duration. This vulnerability is present in XercesJ 
> version 2.12.1 and earlier versions.
> References
> [https://nvd.nist.gov/vuln/detail/CVE-2022-23437]
> https://lists.apache.org/thread/6pjwm10bb69kq955fzr1n0nflnjd27dl
> http://www.openwall.com/lists/oss-security/2022/01/24/3
> https://www.oracle.com/security-alerts/cpuapr2022.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18293) Release Hadoop 3.3.4 critical fix update

2022-06-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18293.
-
Resolution: Duplicate

Forgot about this when I created HADOOP-18305; closing.

> Release Hadoop 3.3.4 critical fix update
> 
>
> Key: HADOOP-18293
> URL: https://issues.apache.org/jira/browse/HADOOP-18293
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Create a new release off the branch-3.3.3 line with a few more changes
> * wrap up of security changes
> * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break the 
> s3a client... reinstate once the updated jar is tested
> * try to get an ARM build out too



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18159) Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]

2022-06-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18159.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> Certificate doesn't match any of the subject alternative names: 
> [*.s3.amazonaws.com, s3.amazonaws.com]
> --
>
> Key: HADOOP-18159
> URL: https://issues.apache.org/jira/browse/HADOOP-18159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: hadoop 3.3.1
> httpclient 4.5.13
> JDK8
>Reporter: André F.
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Trying to run any job after bumping our Spark version (which is now using 
> Hadoop 3.3.1) led us to the following exception while reading files on s3:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on 
> s3a:///.parquet: com.amazonaws.SdkClientException: Unable to 
> execute HTTP request: Certificate for  doesn't match 
> any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: 
> Unable to execute HTTP request: Certificate for  doesn't match any of 
> the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com] at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:208) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170) at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3351)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4277) 
> at {code}
>  
> {code:java}
> Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for 
>  doesn't match any of the subject alternative names: 
> [*.s3.amazonaws.com, s3.amazonaws.com]
>   at 
> com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507)
>   at 
> com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437)
>   at 
> com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
>   at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
>   at com.amazonaws.http.conn.$Proxy16.connect(Unknown Source)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>   at 
> com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1333)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
>   {code}
> We found similar problems in the following tickets, but:
>  - https://issues.apache.org/jira/browse/HADOOP-17017 (we don't use `.` in 
> our bucket names)
>  - [https://github.com/aws/aws-sdk-java-v2/issues/1786] (we tried to override 
> it by using `httpclient:4.5.10` or `httpclient:4.5.8`, with no effect).
> We couldn't test it using the native `openssl` configuration due to our 
> setup, so we would like to stick with the java ssl implementation, if 
> possible.
>  



--
This message was sent by 

[jira] [Resolved] (HADOOP-17833) Improve Magic Committer Performance

2022-06-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17833.
-
Fix Version/s: 3.3.4
 Release Note: S3A filesystem's createFile() operation supports an option to 
disable all safety checks when creating a file. Consult the documentation and 
use with care.
   Resolution: Fixed

> Improve Magic Committer Performance
> ---
>
> Key: HADOOP-17833
> URL: https://issues.apache.org/jira/browse/HADOOP-17833
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 14h
>  Remaining Estimate: 0h
>
> Magic committer tasks can be slow because every file created with 
> overwrite=false triggers a HEAD (verify there's no file) and a LIST (verify 
> there's no dir). And because of delayed manifestations, it may not behave as 
> expected.
> ParquetOutputFormat is one example of a library which does this.
> We could fix parquet to use overwrite=true, but (a) there may be surprises 
> in other uses, (b) it'd still leave the LIST and (c) it would do nothing for 
> other formats' calls.
> Proposed: createFile() under a magic path skips all probes for a file/dir at 
> the end of the path.
> Only a single task attempt will be writing to that directory, and it should 
> know what it is doing. If there are conflicting file names and parts across 
> tasks, that won't even get picked up at this point. Oh, and none of the 
> committers ever check for this: you'll get the last file manifested (s3a) or 
> renamed (file).
> If we skip the checks we will save 2 HTTP requests/file.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18307) remove hadoop-cos as a dependency of hadoop-cloud-storage

2022-06-20 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18307:
---

 Summary: remove hadoop-cos as a dependency of hadoop-cloud-storage
 Key: HADOOP-18307
 URL: https://issues.apache.org/jira/browse/HADOOP-18307
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs
Affects Versions: 3.3.3
Reporter: Steve Loughran
Assignee: Steve Loughran


To deal with HADOOP-18159 without qualifying an updated cos library, remove it 
as an explicit dependency of hadoop-cloud-storage.

It will still be built and published.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-06-20 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18305:
---

 Summary: Release Hadoop 3.3.4: minor update of hadoop-3.3.3
 Key: HADOOP-18305
 URL: https://issues.apache.org/jira/browse/HADOOP-18305
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.3.3
Reporter: Steve Loughran
Assignee: Steve Loughran


Create a Hadoop 3.3.4 release with

* critical fixes
* ARM artifacts as well as the intel ones



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18019) S3AFileSystem.s3GetFileStatus() doesn't find dir markers on minio

2022-06-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18019.
-
Resolution: Won't Fix

> S3AFileSystem.s3GetFileStatus() doesn't find dir markers on minio
> -
>
> Key: HADOOP-18019
> URL: https://issues.apache.org/jira/browse/HADOOP-18019
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.3.1, 3.3.2
> Environment: minio s3-compatible storage
>Reporter: Ruslan Dautkhanov
>Priority: Major
>
> Repro code:
> {code:java}
> val conf = new Configuration()  
> conf.set("fs.s3a.endpoint", "http://127.0.0.1:9000;) 
> conf.set("fs.s3a.path.style.access", "true") 
> conf.set("fs.s3a.access.key", "user_access_key") 
> conf.set("fs.s3a.secret.key", "password")  
> val path = new Path("s3a://comcast-test")  
> val fs = path.getFileSystem(conf)  
> fs.mkdirs(new Path("/testdelta/_delta_log"))  
> fs.getFileStatus(new Path("/testdelta/_delta_log")){code}
> Fails with *FileNotFoundException* on Minio. The same code works on real S3.
> It also works with Minio on Hadoop 3.2 and earlier versions.
> It only fails on 3.3 and newer Hadoop branches.
> The reason as discovered by [~sadikovi] is actually a more fundamental one - 
> Minio does not have empty directories (sort of), see 
> [https://github.com/minio/minio/issues/2423].
> This works in Hadoop 3.2 because of this infamous "Is this necessary?" block 
> of code
> [https://github.com/apache/hadoop/blob/branch-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2204-L2223]
> that was removed in Hadoop 3.3 -
> [https://github.com/apache/hadoop/blob/branch-3.3.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2179]
> and this causes the regression



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18298) Hadoop AWS | Staging committer Multipartupload not completing on minio

2022-06-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18298.
-
Resolution: Invalid

> Hadoop AWS | Staging committer Multipartupload not completing on minio
> --
>
> Key: HADOOP-18298
> URL: https://issues.apache.org/jira/browse/HADOOP-18298
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: minio
>Reporter: Ayush Goyal
>Priority: Major
>
> In the Hadoop AWS staging 
> committer (org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter), the 
> committer uploads files from local disk to S3 (method commitTaskInternal), 
> which calls uploadFileToPendingCommit of CommitOperations to upload each 
> file using multipart upload.
>  
> Multipart upload consists of three steps:
> 1) Initialise the multipart upload.
> 2) Break the file into parts and upload the parts.
> 3) Merge all the parts and finalise the multipart upload.
>  
> In the implementation of uploadFileToPendingCommit, the first 2 steps are 
> implemented. However, the 3rd step is missing, which leads to the parts 
> being uploaded; but because they are never merged at the end of the job, no 
> files appear in the destination directory (see the sketch below).
>  
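For reference, the three steps above in bare v1 SDK terms; bucket, key and 
localFile are placeholders. Note that the staging committer intentionally 
defers the completion call to job commit as a "pending" commit, which is why 
parts can legitimately appear unmerged mid-job:

{code:java}
import java.io.File;
import java.util.Collections;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

String bucket = "example-bucket";             // placeholder
String key = "example/key.parquet";           // placeholder
File localFile = new File("/tmp/part-0000");  // placeholder

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
// step 1: initialise the multipart upload
InitiateMultipartUploadResult init =
    s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key));
// step 2: break the file into parts and upload them (one part shown)
UploadPartResult part = s3.uploadPart(new UploadPartRequest()
    .withBucketName(bucket).withKey(key)
    .withUploadId(init.getUploadId())
    .withPartNumber(1)
    .withFile(localFile)
    .withPartSize(localFile.length()));
// step 3: merge the parts and finalise the upload
s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
    bucket, key, init.getUploadId(),
    Collections.singletonList(part.getPartETag())));
{code}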
> S3 logs before implementing the 3rd step:
>  
> {code:java}
> 2022-05-30T13:49:31:000 [200 OK] s3.NewMultipartUpload 
> localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/part-0-ce0a965f-622a-4950-bb4b-550470883134-c000-b552fb34-6156-4aa8-9085-679ad14fab6e.snappy.parquet?uploads
>   240b:c1d1:123:664f:c5d2:2::               8.677ms      ↑ 137 B ↓ 724 B
> 2022-05-30T13:49:31:000 [200 OK] s3.PutObjectPart 
> localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/part-0-ce0a965f-622a-4950-bb4b-550470883134-c000-b552fb34-6156-4aa8-9085-679ad14fab6e.snappy.parquet?uploadId=f3beae8e-3001-48be-9bc4-306b71940e50=1
>   240b:c1d1:123:664f:c5d2:2::                443.156ms    ↑ 51 KiB ↓ 325 B
> 2022-05-30T13:49:32:000 [200 OK] s3.ListObjectsV2 
> localhost:9000/minio-feature-testing/?list-type=2=%2F=2=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F_SUCCESS%2F=false
>   240b:c1d1:123:664f:c5d2:2::                3.414ms      ↑ 137 B ↓ 646 B
> 2022-05-30T13:49:32:000 [200 OK] s3.PutObject 
> localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/_SUCCESS
>  240b:c1d1:123:664f:c5d2:2::                52.734ms     ↑ 8.7 KiB ↓ 380 B
> 2022-05-30T13:49:32:000 [200 OK] s3.DeleteMultipleObjects 
> localhost:9000/minio-feature-testing/?delete  240b:c1d1:123:664f:c5d2:2::     
>            73.954ms     ↑ 350 B ↓ 432 B
> 2022-05-30T13:49:32:000 [404 Not Found] s3.HeadObject 
> localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/_temporary
>  240b:c1d1:123:664f:c5d2:2::                2.658ms      ↑ 137 B ↓ 291 B
> 2022-05-30T13:49:32:000 [200 OK] s3.ListObjectsV2 
> localhost:9000/minio-feature-testing/?list-type=2=%2F=2=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F_temporary%2F=false
>   240b:c1d1:123:664f:c5d2:2::                 4.807ms      ↑ 137 B ↓ 648 B
> 2022-05-30T13:49:32:000 [200 OK] s3.ListMultipartUploads 
> localhost:9000/minio-feature-testing/?uploads=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F
>   240b:c0e0:102:553e:b4c2:2::               1.081ms      ↑ 137 B ↓ 776 B
> 2022-05-30T13:49:32:000 [404 Not Found] s3.HeadObject 
> localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/.spark-staging-ce0a965f-622a-4950-bb4b-550470883134
>  240b:c1d1:123:664f:c5d2:2::                 5.68ms       ↑ 137 B ↓ 291 B
> 2022-05-30T13:49:32:000 [200 OK] s3.ListObjectsV2 
> localhost:9000/minio-feature-testing/?list-type=2=%2F=2=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F.spark-staging-ce0a965f-622a-4950-bb4b-550470883134%2F=false
>   240b:c1d1:123:664f:c5d2:2::              2.452ms      ↑ 137 B ↓ 689 B
>   {code}
> Here, after s3.PutObjectPart there is no CompleteMultipartUpload call for 
> the 3rd step.
>  
> S3 logs after implementing the 3rd step:
>  
> {code:java}
> 2022-06-17T10:56:12:000 [200 OK] s3.NewMultipartUpload 
> localhost:9000/minio-feature-testing/spark-job/pm-processed/output-parquet-staging-39/day%3D23/hour%3D16/quarter%3D0/part-4-d0b529ca-112f-43f2-a7dd-44de4db6aa7f-dffa7213-d492-48f9-9e6a-fb08bc81ceeb.c000.snappy.parquet?uploads
>   240b:c1d1:123:664f:c5d2:2::               9.116ms      ↑ 137 B ↓ 750 B
> 2022-06-17T10:56:12:000 [200 OK] s3.NewMultipartUpload 
> localhost:9000/minio-feature-testing/spark-job/pm-processed/output-parquet-staging-39/day%3D23/hour%3D15/quarter%3D45/part-4-d0b529ca-112f-43f2-a7dd-44de4db6aa7f-dffa7213-d492-48f9-9e6a-fb08bc81ceeb.c000.snappy.parquet?uploads
>   240b:c1d1:123:664f:c5d2:2::               9.416ms      

[jira] [Created] (HADOOP-18293) Release Hadoop 3.3.4 critical fix update

2022-06-16 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18293:
---

 Summary: Release Hadoop 3.3.4 critical fix update
 Key: HADOOP-18293
 URL: https://issues.apache.org/jira/browse/HADOOP-18293
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Steve Loughran
Assignee: Steve Loughran


Create a new release off the branch-3.3.3 line with a few more changes

* wrap up of security changes
* cut hadoop-cos out of hadoop-cloud-storage as its dependencies break the s3a 
client... reinstate once the updated jar is tested




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18292) s3a storage class reduced redundancy breaks s3 select tests

2022-06-15 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18292:
---

 Summary: s3a storage class reduced redundancy breaks s3 select 
tests
 Key: HADOOP-18292
 URL: https://issues.apache.org/jira/browse/HADOOP-18292
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, test
Affects Versions: 3.4.0
Reporter: Steve Loughran
Assignee: Monthon Klongklaew


when you set your fs client to work with reduced redundancy, the s3 select 
tests fail

probably need to clear the storage class option on the bucket before running 
those suites





--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18287) Provide a shim library for modern FS APIs

2022-06-10 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18287:
---

 Summary: Provide a shim library for modern FS APIs
 Key: HADOOP-18287
 URL: https://issues.apache.org/jira/browse/HADOOP-18287
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 3.3.0
Reporter: Steve Loughran


Add a shim library to give libraries and applications built against hadoop 3.2 
access to APIs and features in later versions, especially those delivering 
higher performance in cloud deployments. This will give them the ability to 
invoke those APIs when available, and so gain from the work everyone has done. 
Key APIs are:

* openFile
* ByteBufferPositionedReadable
* Vectored IO

The library will either downgrade gracefully to existing code (openFile) or 
simply throw UnsupportedOperationException when invoked, but it will offer 
probes for every operation before invocation.

This module will compile against hadoop 3.2.0; it will be tested against that 
and later releases.

We can and should release this on a different schedule; though ideally we 
should issue releases in sync with new hadoop releases adding new supported API 
calls.

For that reason I think we could consider having a separate git repository for 
it. Verifying that the shim works against hadoop PRs could actually become one 
of our regression tests; indeed, it should become one.
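A sketch of the probe-before-invoke pattern the shim would expose, run against 
a 3.3+ runtime; the capability string is assumed to match 
StreamCapabilities.PREADBYTEBUFFER, and the path is a placeholder:

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Path path = new Path("s3a://bucket/data.bin");   // placeholder path
FileSystem fs = path.getFileSystem(new Configuration());
ByteBuffer buffer = ByteBuffer.allocate(1024);
try (FSDataInputStream in = fs.open(path)) {
  if (in.hasCapability("in:preadbytebuffer")) {  // assumed capability name
    in.readFully(0, buffer);                     // ByteBufferPositionedReadable
  } else {
    byte[] bytes = new byte[buffer.remaining()];
    in.readFully(0, bytes);                      // classic fallback path
    buffer.put(bytes);
  }
}
{code}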



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18283) Review s3a prefetching input stream retry code

2022-06-09 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18283:
---

 Summary: Review s3a prefetching input stream retry code
 Key: HADOOP-18283
 URL: https://issues.apache.org/jira/browse/HADOOP-18283
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


Need to review S3A prefetching stream retry logic

* no attempt to retry on unrecoverable errors
* do try on recoverable ones
* no wrap of retry by retry.
* annotate classes with Retries annotations to aid the review.

A key concern has to be that a transient failure of a prefetch is recovered 
from; things like a deleted/shortened file must fail properly on the next 
read call.
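A sketch of what the annotation-assisted review could look like; the class and 
method names here are hypothetical, while the annotations are intended to be 
those in org.apache.hadoop.fs.s3a.Retries:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.s3a.Retries;

class PrefetcherSketch {
  @Retries.RetryTranslated  // retried internally; exceptions translated to IOEs
  void fetchBlock(long offset) throws IOException {
    // recoverable failures (throttling, transient network) retried with backoff
  }

  @Retries.OnceRaw          // no retry here: the caller owns the retry policy
  void fetchBlockOnce(long offset) throws IOException {
    // unrecoverable failures (e.g. FileNotFoundException after a delete)
    // must propagate so the next read() fails properly
  }
}
{code}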



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18281) Tune S3A storage class support

2022-06-08 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18281:
---

 Summary: Tune S3A storage class support
 Key: HADOOP-18281
 URL: https://issues.apache.org/jira/browse/HADOOP-18281
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.3.4
 Environment: 
Followup to HADOOP-12020, with work/review from rebasing HADOOP-17833 atop it.

* Can we merge ITestS3AHugeFilesStorageClass into one of the existing test 
cases? Just because it is slow... ideally we want as few of those as possible, 
even if by testing multiple things at the same time we break the rules of 
testing.
* move setting the storage class into
setOptionalMultipartUploadRequestParameters and setOptionalPutRequestParameters
* both newPutObjectRequest() calls to set storage class

Once HADOOP-17833 is in, make this a new option which can be explicitly used 
in createFile().
I've updated PutObjectOptions to pass a value around, and made sure it gets 
down to the request factory. That leaves:
* setting the storage class from the {{CreateFileBuilder}} options
* testing
* doc update

Reporter: Steve Loughran






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12020) Support AWS S3 reduced redundancy storage class

2022-06-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12020.
-
Fix Version/s: 3.3.4
   Resolution: Fixed

merged to trunk and 3.3.4. thanks!

> Support AWS S3 reduced redundancy storage class
> ---
>
> Key: HADOOP-12020
> URL: https://issues.apache.org/jira/browse/HADOOP-12020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: Hadoop on AWS
>Reporter: Yann Landrin-Schweitzer
>Assignee: Monthon Klongklaew
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Amazon S3 uses, by default, the NORMAL_STORAGE class for s3 objects.
> This offers, according to Amazon's material, 99.999999999% reliability.
> For many applications, however, the 99.99% reliability offered by the 
> REDUCED_REDUNDANCY storage class is amply sufficient, and comes with a 
> significant cost saving.
> HDFS, when using the legacy s3n protocol, or the new s3a scheme, should 
> support overriding the default storage class of created s3 objects so that 
> users can take advantage of this cost benefit.
> This would require minor changes of the s3n and s3a drivers, using 
> a configuration property fs.s3n.storage.class to override the default storage 
> when desirable. 
> This override could be implemented in Jets3tNativeFileSystemStore with:
>   S3Object object = new S3Object(key);
>   ...
>   if(storageClass!=null)  object.setStorageClass(storageClass);
> It would take a more complex form in s3a, e.g. setting:
> InitiateMultipartUploadRequest initiateMPURequest =
> new InitiateMultipartUploadRequest(bucket, key, om);
> if(storageClass !=null ) {
> initiateMPURequest = 
> initiateMPURequest.withStorageClass(storageClass);
> }
> and similar statements in various places.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18278) Do not perform a LIST call when creating a file

2022-06-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18278.
-
Target Version/s: 3.4.0
  Resolution: Duplicate

We do the check to make sure that apps don't create files over directories. If 
they do, your object store loses a lot of its "filesystemness": list, rename 
and delete all break.

HEAD doesn't do the validation, and if you create a file with overwrite=true 
we skip that call. Sadly, parquet likes creating files with overwrite=false; 
it does HEAD and LIST, even when writing to task attempt dirs which are 
exclusively for use by a single thread and will be completely deleted at the 
end of the job.

The magic committer performance issue HADOOP-17833 and its PR 
https://github.com/apache/hadoop/pull/3289 turn off all the safety checks when 
writing under __magic dirs, as we know they are short-lived. We don't even 
check if directories have been created under files. 

The same options are available when writing any file, as the PR contains 
HADOOP-15460, "S3A FS to add fs.s3a.create.performance to the builder file 
creation option set".

{code}
out = fs.createFile(new Path("s3a://bucket/subdir/output.txt"))
    .opt("fs.s3a.create.performance", true)
    .build();
{code}

If you use this you will get the speedup you want anywhere, but you had 
better be confident you are not overwriting a directory. See
https://github.com/steveloughran/hadoop/blob/s3/HADOOP-17833-magic-committer-performance/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md#-s3a-specific-options

At the time of writing (June 8, 2022) this PR is in critical need of review. 
Please look at the patch, review it, and make sure it will work for you. This 
will be your opportunity to make sure it is correct before we ship it. You are 
clearly looking at the internals of what we're doing, so your insight will be 
valued. Thanks.

> Do not perform a LIST call when creating a file
> ---
>
> Key: HADOOP-18278
> URL: https://issues.apache.org/jira/browse/HADOOP-18278
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sam Kramer
>Priority: Major
>
> Hello,
> We've noticed that when creating a file which does not exist in S3, an extra 
> LIST call gets issued to see if it's a directory (i.e. if key = "bar", it 
> will issue an object list request for "bar/"). 
> Is this really necessary? Shouldn't a HEAD request be sufficient to determine 
> whether it actually exists? As we're creating 1000s of files, this is quite 
> expensive; we're effectively doubling our costs for file creation. Curious 
> if others have experienced similar or identical issues, or if there are any 
> workarounds. 
> [https://github.com/apache/hadoop/blob/516a2a8e440378c868ddb02cb3ad14d0d879037f/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L3359-L3369]
>  
> Thanks,
> Sam



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18275) update os-maven-plugin to 1.7.0

2022-06-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18275.
-
Fix Version/s: 3.3.4
   Resolution: Fixed

> update os-maven-plugin to 1.7.0
> ---
>
> Key: HADOOP-18275
> URL: https://issues.apache.org/jira/browse/HADOOP-18275
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> the os-maven-plugin we build with is 1.5; the release is up to 1.7.0.
> Update this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


