This is an automated email from the ASF dual-hosted git repository. acosentino pushed a commit to branch s3-consumer-minimum-permissions in repository https://gitbox.apache.org/repos/asf/camel.git
commit ab277c90e2f2d32002045365c22baff2b180a2f9
Author: Andrea Cosentino <anco...@gmail.com>
AuthorDate: Thu Aug 22 15:40:14 2024 +0200

    Camel-AWS components: Providing minimum permissions documentation for services - S3 Consumer

    Signed-off-by: Andrea Cosentino <anco...@gmail.com>
---
 .../src/main/docs/aws2-s3-component.adoc | 30 ++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
index 7caaad02030..f11e5e6fdd2 100644
--- a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
+++ b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
@@ -356,6 +356,36 @@ A variation to the minimum permissions is related to the usage of Bucket autocre
 }
 --------------------------------------------------------------------------------
 
+=== AWS S3 Consumer minimum permissions
+
+To make the consumer work, you will need at least the GetObject, ListBucket, and DeleteObject permissions. The following policy is sufficient:
+
+[source,json]
+--------------------------------------------------------------------------------
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Action": "s3:ListBucket",
+            "Resource": "arn:aws:s3:::*"
+        },
+        {
+            "Effect": "Allow",
+            "Action": "s3:GetObject",
+            "Resource": "arn:aws:s3:::*/*"
+        },
+        {
+            "Effect": "Allow",
+            "Action": "s3:DeleteObject",
+            "Resource": "arn:aws:s3:::*/*"
+        }
+    ]
+}
+--------------------------------------------------------------------------------
+
+By default, the consumer uses the deleteAfterRead option, which means each object is deleted once it has been consumed; this is why the DeleteObject permission is required.
+
 === Streaming Upload mode
 
 With the stream mode enabled, users will be able to upload data to S3 without knowing ahead of time the dimension of the data, by leveraging multipart upload.
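As a side note to the policy added in this commit, the minimum consumer policy can also be assembled programmatically before attaching it with your IAM tooling of choice. A minimal Python sketch (the `s3_consumer_minimum_policy` helper is illustrative only, not part of Camel or the AWS SDK):

```python
import json


def s3_consumer_minimum_policy(bucket_arn: str = "arn:aws:s3:::*") -> str:
    """Build the minimum IAM policy document for the Camel AWS S3 consumer.

    ListBucket applies to the bucket itself, while GetObject and
    DeleteObject apply to the objects inside it (hence the trailing
    "/*" on the resource ARN). DeleteObject is required because the
    consumer's default deleteAfterRead=true removes each object after
    it has been consumed.
    """
    object_arn = bucket_arn + "/*"
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": bucket_arn},
            {"Effect": "Allow", "Action": "s3:GetObject", "Resource": object_arn},
            {"Effect": "Allow", "Action": "s3:DeleteObject", "Resource": object_arn},
        ],
    }
    return json.dumps(policy, indent=4)


print(s3_consumer_minimum_policy())
```

Scoping `bucket_arn` to a specific bucket (e.g. `arn:aws:s3:::my-bucket`) rather than the wildcard shown in the documentation is generally preferable in production.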