[ https://issues.apache.org/jira/browse/HADOOP-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803573#comment-17803573 ]
ASF GitHub Bot commented on HADOOP-18830:
-----------------------------------------

steveloughran commented on PR #6144:
URL: https://github.com/apache/hadoop/pull/6144#issuecomment-1878842405

rebased PR with retest. Failures are unrelated; the signing one has an active PR to fix, and the committer one looks like my config is at fault (bucket overrides not being cut).

```
[ERROR] Failures:
[ERROR]   ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165
Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, but got the result: :
FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401050108_0001}; taskId=attempt_202401050108_0001_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@61fa8914}; outputPath=s3a://stevel--usw2-az1--x-s3/fork-0001/test/testEverything, workPath=s3a://stevel--usw2-az1--x-s3/fork-0001/test/testEverything/_temporary/1/_temporary/attempt_202401050108_0001_m_000000_0, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}

[ERROR] Errors:
[ERROR]   ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest
[ERROR]   ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest
[INFO]
```

> S3A: Cut S3 Select
> ------------------
>
>                 Key: HADOOP-18830
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18830
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> Getting S3 Select to work with the v2 SDK is tricky: we need to add extra
> libraries to the classpath beyond just bundle.jar.
> We can do this, but:
> * AFAIK nobody has ever done CSV predicate pushdown, as it breaks split logic completely
> * CSV is a bad format
> * one-line JSON is more structured but also far less efficient
>
> ORC/Parquet benefit from vectored IO and work spanning the cluster.
>
> Accordingly, I'm wondering what to do about S3 Select:
> # cut it?
> # downgrade it to optional and document the extra classes needed on the classpath
>
> Option #2 is straightforward and effectively the default. We can also declare the feature deprecated.
>
> {code}
> [ERROR] testReadLandsatRecordsNoMatch(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)  Time elapsed: 147.958 s <<< ERROR!
> java.io.IOException: java.lang.NoClassDefFoundError: software/amazon/eventstream/MessageDecoder
> 	at org.apache.hadoop.fs.s3a.select.SelectObjectContentHelper.select(SelectObjectContentHelper.java:75)
> 	at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$select$10(WriteOperationHelper.java:660)
> 	at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$withinAuditSpan$0(AuditingFunctions.java:62)
> 	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
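If option #2 (downgrade to optional) were taken, the NoClassDefFoundError above could be avoided by probing for the optional class before enabling the feature. The following is a minimal sketch, not Hadoop code: the `ClasspathProbe` helper is hypothetical, and only the probed class name (`software.amazon.eventstream.MessageDecoder`) comes from the stack trace above.

```java
// Sketch of a reflective classpath probe for an optional dependency.
// ClasspathProbe is an illustrative helper, not part of Hadoop.
public final class ClasspathProbe {

    private ClasspathProbe() {
    }

    /** Return true iff the named class can be loaded without initialization. */
    public static boolean isClassAvailable(String className) {
        try {
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Always present on any JVM:
        System.out.println(isClassAvailable("java.lang.String"));
        // Only present when the extra AWS SDK eventstream jar is on the
        // classpath alongside bundle.jar:
        System.out.println(isClassAvailable("software.amazon.eventstream.MessageDecoder"));
    }
}
```

A feature gate like this lets the filesystem report a clear "S3 Select requires extra jars" error at open time rather than failing mid-read with a NoClassDefFoundError.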