This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-kamelets.git


The following commit(s) were added to refs/heads/main by this push:
     new 920b2e0d Kamelets Catalog - Make the description a one liner (#2358)
920b2e0d is described below

commit 920b2e0db4edc416abbbac5b0c589e6b9e899a1a
Author: Andrea Cosentino <[email protected]>
AuthorDate: Wed Jan 15 17:14:25 2025 +0100

    Kamelets Catalog - Make the description a one liner (#2358)
    
    * Kamelets Catalog - Make the description a one liner - AWS S3 Event based Source
    
    Signed-off-by: Andrea Cosentino <[email protected]>
    
    * Kamelets Catalog - Make the description a one liner - AWS S3 Sink
    
    Signed-off-by: Andrea Cosentino <[email protected]>
    
    ---------
    
    Signed-off-by: Andrea Cosentino <[email protected]>
---
 .../aws-s3-event-based-source-description.adoc     | 11 +++++++++
 .../ROOT/partials/aws-s3-sink-description.adoc     | 26 ++++++++++++++++++++++
 kamelets/aws-s3-event-based-source.kamelet.yaml    | 10 +--------
 kamelets/aws-s3-sink.kamelet.yaml                  | 11 +--------
 .../aws-s3-event-based-source.kamelet.yaml         | 10 +--------
 .../resources/kamelets/aws-s3-sink.kamelet.yaml    | 11 +--------
 6 files changed, 41 insertions(+), 38 deletions(-)

diff --git a/docs/modules/ROOT/partials/aws-s3-event-based-source-description.adoc b/docs/modules/ROOT/partials/aws-s3-event-based-source-description.adoc
new file mode 100644
index 00000000..01a9c01a
--- /dev/null
+++ b/docs/modules/ROOT/partials/aws-s3-event-based-source-description.adoc
@@ -0,0 +1,11 @@
+== AWS S3 Event based Source Kamelet Description
+
+=== Authentication methods
+
+Access Key and Secret Key are the basic method for authenticating to the AWS SQS Service.
+
+=== Required Setup
+
+To use this Kamelet you'll need to set up Eventbridge on your bucket and subscribe the Eventbridge bus to an SQS Queue.
+
+To do this, enable Eventbridge notifications on your bucket and create a rule in the Eventbridge console that matches all events on the S3 bucket and points to the SQS Queue specified as a parameter in this Kamelet.
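The Eventbridge-to-SQS wiring described above could be consumed with a Camel K Pipe. A minimal sketch, assuming the Kamelet also accepts `region` and `queueNameOrArn` parameters (only `accessKey` and `secretKey` appear as required in this diff) and that a `log-sink` Kamelet is available in the cluster:

```yaml
# Hypothetical Pipe reading S3 events from the SQS queue targeted by the
# Eventbridge rule; parameter names other than accessKey/secretKey are assumptions.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: s3-event-based-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-event-based-source
    properties:
      accessKey: "${aws.accessKey}"    # required per this diff
      secretKey: "${aws.secretKey}"    # required per this diff
      region: "eu-west-1"              # assumed parameter name
      queueNameOrArn: "my-s3-events"   # assumed parameter name
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: log-sink
```

The Pipe only consumes the queue; the bucket-to-Eventbridge-to-SQS plumbing itself still has to be created on the AWS side as described in the partial.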
diff --git a/docs/modules/ROOT/partials/aws-s3-sink-description.adoc b/docs/modules/ROOT/partials/aws-s3-sink-description.adoc
new file mode 100644
index 00000000..dee86f3d
--- /dev/null
+++ b/docs/modules/ROOT/partials/aws-s3-sink-description.adoc
@@ -0,0 +1,26 @@
+== AWS S3 Sink Kamelet Description
+
+=== Authentication methods
+
+In this Kamelet you can avoid using explicit static credentials by setting the useDefaultCredentialsProvider option to true.
+
+The order of evaluation for Default Credentials Provider is the following:
+
+ - Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+ - Environment variables - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
+ - Web Identity Token from AWS STS.
+ - The shared credentials and config files.
 - Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is set.
+ - Amazon EC2 Instance profile credentials. 
+ 
+You can also use the Profile Credentials Provider, by setting the useProfileCredentialsProvider option to true and profileCredentialsName to the profile name.
+
+Only one of access key/secret key or the default credentials provider can be used.
+
+For more information, see the https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation].
+
+=== Optional Headers
+
+In the header, you can optionally set the `file` / `ce-file` property to specify the name of the file to upload.
+
+If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.
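The optional `file` header described above can be set from a plain Camel YAML route. A minimal sketch, assuming the Camel YAML DSL `setHeader` shorthand and placeholder bucket, region, and credential values:

```yaml
# Hypothetical route uploading a fixed payload as "greetings.txt";
# bucket, region and credential values are placeholders.
- route:
    from:
      uri: timer:tick
      parameters:
        repeatCount: 1
      steps:
        - setBody:
            constant: "Hello from Camel"
        - setHeader:
            name: file                 # without this header the exchange ID is used
            constant: "greetings.txt"
        - to:
            uri: kamelet:aws-s3-sink
            parameters:
              bucketNameOrArn: "my-bucket"
              region: "eu-west-1"
              accessKey: "${aws.accessKey}"
              secretKey: "${aws.secretKey}"
```

With useDefaultCredentialsProvider set to true instead, the accessKey/secretKey parameters could be dropped and the credentials resolved through the provider chain listed in the partial.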
diff --git a/kamelets/aws-s3-event-based-source.kamelet.yaml b/kamelets/aws-s3-event-based-source.kamelet.yaml
index 396d3508..a3c52478 100644
--- a/kamelets/aws-s3-event-based-source.kamelet.yaml
+++ b/kamelets/aws-s3-event-based-source.kamelet.yaml
@@ -33,15 +33,7 @@ metadata:
 spec:
   definition:
     title: AWS S3 Event Based Source
-    description: >-
-      Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
-
-      Access Key/Secret Key are the basic method for authenticating to the AWS
-      SQS Service.
-
-      To use this Kamelet you'll need to set up Eventbridge on your bucket and subscribe Eventbridge bus to an SQS Queue.
-      
-      For doing this you'll need to enable Evenbridge notification on your bucket and creating a rule on Eventbridge console related to all the events on S3 bucket and pointing to the SQS Queue specified as parameter in this Kamelet.
+    description: Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
     required:
       - accessKey
       - secretKey
diff --git a/kamelets/aws-s3-sink.kamelet.yaml b/kamelets/aws-s3-sink.kamelet.yaml
index fd0a2a1d..d73e2f6e 100644
--- a/kamelets/aws-s3-sink.kamelet.yaml
+++ b/kamelets/aws-s3-sink.kamelet.yaml
@@ -31,16 +31,7 @@ metadata:
 spec:
   definition:
     title: "AWS S3 Sink"
-    description: |-
-      Upload data to an Amazon S3 Bucket.
-
-      The basic authentication method for the S3 service is to specify an access key and a secret key. These parameters are optional because the Kamelet provides a default credentials provider.
-
-      If you use the default credentials provider, the S3 client loads the credentials through this provider and doesn't use the basic authentication method.
-
-      In the header, you can optionally set the `file` / `ce-partition` property to specify the name of the file to upload.
-
-      If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.
+    description: Upload data to an Amazon S3 Bucket.
     required:
       - bucketNameOrArn
       - region
diff --git a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml
index 396d3508..a3c52478 100644
--- a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml
+++ b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml
@@ -33,15 +33,7 @@ metadata:
 spec:
   definition:
     title: AWS S3 Event Based Source
-    description: >-
-      Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
-
-      Access Key/Secret Key are the basic method for authenticating to the AWS
-      SQS Service.
-
-      To use this Kamelet you'll need to set up Eventbridge on your bucket and subscribe Eventbridge bus to an SQS Queue.
-      
-      For doing this you'll need to enable Evenbridge notification on your bucket and creating a rule on Eventbridge console related to all the events on S3 bucket and pointing to the SQS Queue specified as parameter in this Kamelet.
+    description: Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
     required:
       - accessKey
       - secretKey
diff --git a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml
index fd0a2a1d..d73e2f6e 100644
--- a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml
+++ b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml
@@ -31,16 +31,7 @@ metadata:
 spec:
   definition:
     title: "AWS S3 Sink"
-    description: |-
-      Upload data to an Amazon S3 Bucket.
-
-      The basic authentication method for the S3 service is to specify an access key and a secret key. These parameters are optional because the Kamelet provides a default credentials provider.
-
-      If you use the default credentials provider, the S3 client loads the credentials through this provider and doesn't use the basic authentication method.
-
-      In the header, you can optionally set the `file` / `ce-partition` property to specify the name of the file to upload.
-
-      If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.
+    description: Upload data to an Amazon S3 Bucket.
     required:
       - bucketNameOrArn
       - region
