This is an automated email from the ASF dual-hosted git repository.

orpiske pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/main by this push:
     new d7c4fb51097 (chores) docs: minor grammar fixes for documents updated 
in 4.9
d7c4fb51097 is described below

commit d7c4fb5109745ff0365bc318a472808d705ca603
Author: Otavio Rodolfo Piske <angusyo...@gmail.com>
AuthorDate: Mon Dec 2 10:45:01 2024 +0100

    (chores) docs: minor grammar fixes for documents updated in 4.9
---
 .../src/main/docs/azure-key-vault-component.adoc   |  19 ++--
 .../src/main/docs/opentelemetry.adoc               |   2 +-
 .../docs/modules/eips/pages/aggregate-eip.adoc     |   6 +-
 .../modules/ROOT/pages/camel-jbang-kubernetes.adoc | 104 ++++++++++-----------
 docs/user-manual/modules/ROOT/pages/security.adoc  |  58 ++++++------
 5 files changed, 96 insertions(+), 93 deletions(-)

diff --git 
a/components/camel-azure/camel-azure-key-vault/src/main/docs/azure-key-vault-component.adoc
 
b/components/camel-azure/camel-azure-key-vault/src/main/docs/azure-key-vault-component.adoc
index 283267dde0e..b3669fac12e 100644
--- 
a/components/camel-azure/camel-azure-key-vault/src/main/docs/azure-key-vault-component.adoc
+++ 
b/components/camel-azure/camel-azure-key-vault/src/main/docs/azure-key-vault-component.adoc
@@ -277,13 +277,13 @@ The only requirement is adding the camel-azure-key-vault 
jar to your Camel appli
 
 === Automatic Camel context reloading on Secret Refresh - Required 
Infrastructure's creation
 
-First of all we need to create an application
+First, we need to create an application
 
 ```
 az ad app create --display-name test-app-key-vault
 ```
 
-Then we need to obtain credentials
+Then we need to obtain the credentials
 
 ```
 az ad app credential reset --id <appId> --append --display-name 'Description: 
Key Vault app client' --end-date '2024-12-31'
@@ -320,7 +320,9 @@ At this point we need to add a role to the application with 
role assignment
 az role assignment create --assignee <appId> --role "Key Vault Administrator" 
--scope 
/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.KeyVault/vaults/<vaultName>
 ```
 
-Last step is to create policy on what can be or cannot be done with the 
application. In this case we just want to read the secret value. So This should 
be enough.
+The last step is to create a policy on what can be or cannot be done with the 
application.
+In this case, we just want to read the secret value.
+So this should be enough.
 
 ```
 az keyvault set-policy --name <vaultName> --spn <appId> --secret-permissions 
get
@@ -401,13 +403,14 @@ Select the Key Vault just created. In the menu select 
"Events".
 
 Then Select the Event Hub icon.
 
-In the page that will open, define a name for the event subscription for 
example "keyvault-to-eh".
+On the page that opens, define a name for the event subscription, for example, "keyvault-to-eh".
 
-In the System topic name field add "keyvault-to-eh-topic" for example.
+In the System topic name field, add `keyvault-to-eh-topic`, for example.
 
 In the "Filter to Event Types" leave the default value of 9.
 
-In the configure endpoint section for Eventhub, in the Event Hub namespace 
section you should notice the namespace you've created through the AZ CLI, 
select that and in the Event Hub dropdown menu select the Event Hub you've 
created through the AZ CLI. Press confirm selection. 
+In the configure endpoint section for Event Hub, in the Event Hub namespace section, you should notice the namespace you've created through the AZ CLI.
+Select that, and in the Event Hub dropdown menu, select the Event Hub you've created through the AZ CLI. Press confirm selection.
 
 Leave everything as it is and press "Create".
 
@@ -473,14 +476,14 @@ include::spring-boot:partial$starter.adoc[]
 
 Azure Key Vault Spring Boot component starter offers the ability to early 
resolve properties, so the end user could resolve properties directly in the 
application.properties before both Spring Boot runtime and Camel context will 
start.
 
-This could be accomplished in the following way. You should specified this 
property in your application.properties file:
+This could be done in the following way. You should specify this property in 
your `application.properties` file:
 
 [source,bash]
 ----
 camel.component.azure-key-vault.early-resolve-properties=true
 ----
 
-This will enable the feature so you'll be able to resolved properties, in your 
application.properties file, like:
+This will enable the feature, so you'll be able to resolve properties in your `application.properties` file, like:
 
 [source,bash]
 ----
diff --git a/components/camel-opentelemetry/src/main/docs/opentelemetry.adoc 
b/components/camel-opentelemetry/src/main/docs/opentelemetry.adoc
index a0349b90559..45f0e134901 100644
--- a/components/camel-opentelemetry/src/main/docs/opentelemetry.adoc
+++ b/components/camel-opentelemetry/src/main/docs/opentelemetry.adoc
@@ -122,7 +122,7 @@ Set the property `management.tracing.sampling.probability` 
to `1.0` if you want
 === SpanExporters
 
 You'll probably want to configure at least one 
https://opentelemetry.io/docs/languages/java/sdk/#spanexporter[SpanExporter]
-as they allow you to export your traces to various backends (e.g Zipkin and 
Jaeger) or log them. For example, to export your traces to Jaeger using OTLP 
via gRPC,
+as they allow you to export your traces to various backends (e.g., Zipkin and 
Jaeger), or log them. For example, to export your traces to Jaeger using OTLP 
via gRPC,
 add `io.opentelemetry:opentelemetry-exporter-otlp` as a dependency to your 
project. To configure it, you can
 use the `management.otlp.tracing` properties or register a new `SpanExporter` 
bean yourself:
 
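For the properties-based approach, a minimal sketch could look like the following (the endpoint value is an assumption for a locally running OTLP-capable collector; `management.tracing.sampling.probability` is the property mentioned earlier on this page):

[source,properties]
----
# assumed endpoint of a local OTLP-capable collector such as Jaeger
management.otlp.tracing.endpoint=http://localhost:4318/v1/traces
management.tracing.sampling.probability=1.0
----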
diff --git 
a/core/camel-core-engine/src/main/docs/modules/eips/pages/aggregate-eip.adoc 
b/core/camel-core-engine/src/main/docs/modules/eips/pages/aggregate-eip.adoc
index 548f0b7ca57..cac9dba60d4 100644
--- a/core/camel-core-engine/src/main/docs/modules/eips/pages/aggregate-eip.adoc
+++ b/core/camel-core-engine/src/main/docs/modules/eips/pages/aggregate-eip.adoc
@@ -698,10 +698,10 @@ The first parameter is the `List` of names, and the 
second parameter is the inco
 === Aggregating after large split
 
 If you use the xref:split-eip.adoc[Split] EIP before this aggregator then 
beware that if you
-use a completion condition, such as `completionSize(1)` then this can lead to 
the current thread
-being over utilized and its thread-stack becomes very large, and the JVM can 
throw `StackOverflowException`.
+use a completion condition, such as `completionSize(1)`, then this can lead to 
the current thread
+being over-utilized and its thread-stack becoming very large. This can cause the JVM to throw a `StackOverflowException`.
 
-The reason is that same thread is both doing the large split, the aggregation, 
and also the completion
+The reason is that the same thread is both doing the large split, the 
aggregation, and also the completion
 of the aggregator all in the same thread. This can lead to deep thread-stacks. 
To avoid this,
 you can ensure the aggregator uses a different thread to process the 
completion routing, by enabling `parallelProcessing(true)`.
 
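For illustration, a minimal Java DSL sketch of this setup (the endpoint names and the grouped-body aggregation strategy are assumptions, not prescribed by the text above):

[source,java]
----
import org.apache.camel.builder.AggregationStrategies;
import org.apache.camel.builder.RouteBuilder;

public class SplitThenAggregateRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // large split: every element is forwarded to the aggregator route
            .split(body()).streaming()
                .to("direct:aggregate");

        from("direct:aggregate")
            .aggregate(constant(true), AggregationStrategies.groupedBody())
                .completionSize(1)
                // let a separate thread run the completion routing so the
                // splitter thread's stack stays shallow
                .parallelProcessing()
                .to("mock:result");
    }
}
----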
diff --git a/docs/user-manual/modules/ROOT/pages/camel-jbang-kubernetes.adoc 
b/docs/user-manual/modules/ROOT/pages/camel-jbang-kubernetes.adoc
index f4346655497..450a99950b0 100644
--- a/docs/user-manual/modules/ROOT/pages/camel-jbang-kubernetes.adoc
+++ b/docs/user-manual/modules/ROOT/pages/camel-jbang-kubernetes.adoc
@@ -54,20 +54,20 @@ The project export generates a proper Maven/Gradle project 
following one of the
 In case you export the project with the Kubernetes plugin the exported project 
holds all information (e.g. sources, properties, dependencies, etc.) and is 
ready to build, push and deploy the application to Kubernetes, too.
 The export generates a Kubernetes manifest (kubernetes.yml) that holds all 
resources (e.g. Deployment, Service, ConfigMap) required to run the application 
on Kubernetes.
 
-You can create a project export with following command.
+You can create a project export with the following command.
 
 [source,bash]
 ----
 camel kubernetes export route.yaml --dir some/path/to/project
 ----
 
-The command receives one or more source files (e.g. Camel routes) and performs 
the export.
-As a result you will find the Maven/Gradle project sources generated into the 
given project path.
+The command receives one or more source files (e.g., Camel routes) and performs the export.
+As a result, you will find the Maven/Gradle project sources generated into the 
given project path.
 
 The default runtime of the project is Quarkus.
 You can adjust the runtime with an additional command option 
`--runtime=quarkus`.
 
-If you want to run this application on Kubernetes you need to build the 
container image, push it to a registry and deploy the application to Kubernetes.
+If you want to run this application on Kubernetes, you need to build the 
container image, push it to a registry and deploy the application to Kubernetes.
 
 TIP: The Camel JBang Kubernetes plugin provides a `run` command that combines 
these steps (export, container image build, push, deploy) into a single command.
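For instance, the whole cycle can be a single invocation (the `--image-registry=kind` value targets a local Kind cluster, as shown later in this guide):

[source,bash]
----
camel kubernetes run route.yaml --image-registry=kind
----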
 
@@ -78,11 +78,11 @@ You can now navigate to the generated project folder and 
build the project artif
 ./mvnw package -Dquarkus.container-image.build=true
 ----
 
-According to the runtime type (e.g. quarkus) defined for the export this 
builds and creates a Quarkus application artifact JAR in the Maven build output 
folder (e.g. `target/route-1.0-SNAPSHOT.jar`).
+According to the runtime type (e.g., quarkus) defined for the export, this builds a Quarkus application artifact JAR in the Maven build output folder (e.g., `target/route-1.0-SNAPSHOT.jar`).
 
 The option `-Dquarkus.container-image.build=true` also builds a container 
image that is ready for deployment to Kubernetes.
-More precisely the exported project uses the very same tooling and options as 
an arbitrary Quarkus/SpringBoot application would do.
-This means you can easily customize the container image and all settings 
provided by the runtime provider (e.g. Quarkus or SpringBoot) after the export.
+More precisely, the exported project uses the very same tooling and options as 
an arbitrary Quarkus/SpringBoot application would do.
+This means you can easily customize the container image and all settings 
provided by the runtime provider (e.g., Quarkus or SpringBoot) after the export.
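For the Quarkus runtime, for example, the container image coordinates could be tuned in `application.properties` (a sketch; the group, name and registry values are placeholders):

[source,properties]
----
quarkus.container-image.group=my-org
quarkus.container-image.name=my-app
quarkus.container-image.registry=quay.io
----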
 
 The Kubernetes deployment resources are automatically generated with the 
export, too.
 
@@ -137,7 +137,7 @@ The Camel JBang Kubernetes export command provides several 
options to customize
 |A Service that the integration should bind to, specified as 
[[apigroup/]version:]kind:[namespace/]name.
 
 |--source
-|Add source file to your integration, this is added to the list of files 
listed as arguments of the command.
+|Add the source file to your integration; this is added to the list of files listed as arguments of the command.
 
 |--annotation
 |Add an annotation to the integration. Use name values pairs like 
"--annotation my.company=hello".
@@ -177,10 +177,10 @@ The Kubernetes manifest (kubernetes.yml) describes all 
resources to successfully
 The manifest usually holds the deployment, a service definition, config maps 
and much more.
 
 You can use several options on the `export` command to customize this manifest 
with the traits.
-The trait concept was born out of Camel K and the Camel K operator uses the 
traits to configure the Kubernetes resources that are managed by an integration.
+The trait concept was born out of Camel K, and the Camel K operator uses the 
traits to configure the Kubernetes resources that are managed by an integration.
 You can use the same options to also customize the Kubernetes manifest that is 
generated as part of the project export.
 
-The configuration of the traits are used by the given order:
+The configuration of the traits is applied in the following order:
 
 1. Use the `--trait` command options values
 2. Any annotation starting with the prefix `trait.camel.apache.org/*`
@@ -189,9 +189,9 @@ The configuration of the traits are used by the given order:
 
 === Container trait options
 
-The container specification is part of the Kubernetes Deployment resource and 
describes the application container image, exposed ports and health probes for 
example.
+The container specification is part of the Kubernetes Deployment resource and 
describes the application container image, exposed ports and health probes, for 
example.
 
-The container trait is able to customize the container specification with 
following options:
+The container trait is able to customize the container specification with the 
following options:
 
 [cols="2m,1m,5a"]
 |===
@@ -417,7 +417,7 @@ spec:
 === Service trait options
 
 The Service trait enhances the Kubernetes manifest with a Service resource so 
that the application can be accessed by other components in the same namespace.
-The service resource exposes the application with a protocol (e.g. TCP/IP) on 
a given port and uses either `ClusterIP`, `NodePort` or `LoadBalancer` type.
+The service resource exposes the application with a protocol (e.g., TCP/IP) on 
a given port and uses either `ClusterIP`, `NodePort` or `LoadBalancer` type.
 
 The Camel JBang plugin automatically inspects the Camel routes for exposed 
Http services and adds the service resource when applicable.
 This means when one of the Camel routes exposes a Http service (for instance 
by using the `platform-http` component) the Kubernetes manifest also creates a 
Kubernetes Service resource besides the arbitrary Deployment.
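A minimal route that would trigger such a Service resource might look like this (a YAML DSL sketch; the endpoint path and body are illustrative):

[source,yaml]
----
- from:
    uri: platform-http:/hello
    steps:
      - setBody:
          constant: Hello from Camel
----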
@@ -446,10 +446,10 @@ You can customize the generated Kubernetes service 
resource with trait options:
 
 https://knative.dev/docs/serving/[Knative serving] defines a set of resources 
on Kubernetes to handle Serverless workloads with automatic scaling and 
scale-to-zero functionality.
 
-When Knative serving is available on the target Kubernetes cluster you may 
want to use the Knative service resource instead of an arbitrary Kubernetes 
service resource.
+When Knative serving is available on the target Kubernetes cluster, you may 
want to use the Knative service resource instead of an arbitrary Kubernetes 
service resource.
 The Knative service trait will create such a resource as part of the 
Kubernetes manifest.
 
-NOTE: You need to enable the Knative service trait with `--trait 
knative-service.enabled=true` option. Otherwise the Camel JBang export will 
always create an arbitrary Kubernetes service resource.
+NOTE: You need to enable the Knative service trait with the `--trait knative-service.enabled=true` option. Otherwise, the Camel JBang export will always create an arbitrary Kubernetes service resource.
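For example (a sketch combining the export command used throughout this page with the trait option from the note above):

[source,bash]
----
camel kubernetes export route.yaml --trait knative-service.enabled=true
----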
 
 The trait offers following options for customization:
 
@@ -464,18 +464,18 @@ The trait offers following options for customization:
 | knative-service.annotations
 | map[string]string
 | The annotations added to route.
-This can be used to set knative service specific annotations
+This can be used to set Knative service-specific annotations.
 CLI usage example: -t 
"knative-service.annotations.'haproxy.router.openshift.io/balance'=true"
 
 | knative-service.class
 | string
-| Configures the Knative autoscaling class property (e.g. to set 
`hpa.autoscaling.knative.dev` or `kpa.autoscaling.knative.dev` autoscaling).
+| Configures the Knative autoscaling class property (e.g., to set 
`hpa.autoscaling.knative.dev` or `kpa.autoscaling.knative.dev` autoscaling).
 
 Refer to the Knative documentation for more information.
 
 | knative-service.autoscaling-metric
 | string
-| Configures the Knative autoscaling metric property (e.g. to set 
`concurrency` based or `cpu` based autoscaling).
+| Configures the Knative autoscaling metric property (e.g., to set 
`concurrency` based or `cpu` based autoscaling).
 
 Refer to the Knative documentation for more information.
 
@@ -532,7 +532,7 @@ The export command assists you in configuring both the 
Knative component and the
 
 You can configure the Knative component with the Knative trait.
 
-The trait offers following options for customization:
+The trait offers the following options for customization:
 
 [cols="2m,1m,5a"]
 |===
@@ -554,7 +554,7 @@ Refer to the Knative documentation for more information.
 
 | knative.channel-sources
 | []string
-| List of channels used as source of camel routes. Can contain simple channel 
names or full Camel URIs.
+| List of channels used as the source of Camel routes. Can contain simple channel names or full Camel URIs.
 
 | knative.endpoint-sinks
 | []string
@@ -590,7 +590,7 @@ Refer to the Knative documentation for more information.
 === Knative trigger
 
 The concept of a Knative trigger allows you to consume events from the 
https://knative.dev/docs/eventing/[Knative eventing] broker.
-In case your Camel route uses the Knative component as a consumer you may need 
to create a trigger in Kubernetes in order
+In case your Camel route uses the Knative component as a consumer, you may need to create a trigger in Kubernetes
 to connect your Camel application with the Knative broker.
 
 The Camel JBang Kubernetes plugin is able to automatically create this trigger 
for you.
@@ -608,14 +608,14 @@ to run the Camel application on Kubernetes.
 ----
 
 The route consumes Knative events of type `camel.evt.type`.
-If you export this route with the Camel JBang Kubernetes plugin you will see a 
Knative trigger being generated as part of the Kubernetes manifest 
(kubernetes.yml).
+If you export this route with the Camel JBang Kubernetes plugin, you will see 
a Knative trigger being generated as part of the Kubernetes manifest 
(kubernetes.yml).
 
 [source,bash]
 ----
 camel kubernetes export knative-route.yaml
 ----
 
-The generated export project can be deployed to Kubernetes and as part of the 
deployment the trigger is automatically created so the application can start 
consuming events.
+The generated export project can be deployed to Kubernetes, and as part of the 
deployment, the trigger is automatically created so the application can start 
consuming events.
 
 The generated trigger looks as follows:
 
@@ -670,7 +670,7 @@ Now you can just deploy the application using the 
Kubernetes manifest and see th
 === Knative channel subscription
 
 Knative channels represent another form of producing and consuming events from 
the Knative broker.
-Instead of using a trigger you can create a subscription for a Knative channel 
to consume events.
+Instead of using a trigger, you can create a subscription for a Knative 
channel to consume events.
 
 The Camel route that connects to a Knative channel in order to receive events 
looks like this:
 
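A minimal sketch of such a consumer route (YAML DSL; the channel name `my-channel` matches the one used later in this section, the log step is illustrative):

[source,yaml]
----
- from:
    uri: knative:channel/my-channel
    steps:
      - to: log:events
----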
@@ -691,7 +691,7 @@ You just need to export the Camel route as usual.
 camel kubernetes export knative-route.yaml
 ----
 
-The code inspection recognizes the Knative component that references the 
Knative channel and the subscription automatically becomes part of the exported 
Kubernetes manifest.
+The code inspection recognizes the Knative component that references the 
Knative channel, and the subscription automatically becomes part of the 
exported Kubernetes manifest.
 
 Here is an example subscription that has been generated during the export:
 
@@ -714,7 +714,7 @@ spec:
     uri: /channels/my-channel
 ----
 
-The subscription connects the Camel application with the channel so each event 
on the channel is sent to the Kubernetes service resource that also has been 
created as part of the Kubernetes manifest.
+The subscription connects the Camel application with the channel, so each 
event on the channel is sent to the Kubernetes service resource that also has 
been created as part of the Kubernetes manifest.
 
 The Camel Knative component uses a service resource configuration internally 
to create the proper Http service.
 You can review the Knative service resource configuration that makes Camel 
configure the Knative component.
@@ -738,8 +738,8 @@ Here is an example of the generated `knative.json` file:
 }
 ----
 
-Assuming that you have Knative eventing installed on your cluster and that you 
have setup the Knative channel `my-channel` you can start consuming events 
right away.
-The deployment of the exported project uses the Kubernetes manifest to create 
all required resources including the Knative subscription.
+Assuming that you have Knative eventing installed on your cluster and that you have set up the Knative channel `my-channel`, you can start consuming events right away.
+The deployment of the exported project uses the Kubernetes manifest to create 
all required resources, including the Knative subscription.
 
 === Knative sink binding
 
@@ -763,13 +763,13 @@ The following route produces events on a Knative broker:
 ----
 
 The route produces events of type `camel.evt.type` and pushes the events to 
the broker named `my-broker`.
-At this point the actual Knative broker URL is unknown.
+At this point, the actual Knative broker URL is unknown.
 The sink binding is going to resolve the URL and inject its value at 
deployment time using the `K_SINK` environment variable.
 
 The Camel JBang Kubernetes plugin export automatically inspects such a route 
and automatically creates the sink binding resource for us.
 The sink binding is part of the exported Kubernetes manifest and is created on 
the cluster as part of the deployment.
 
-A sink binding resource that is created by the export command looks like 
follows:
+A sink binding resource created by the export command looks as follows:
 
 [source,bash]
 ----
@@ -796,7 +796,7 @@ spec:
     name: knative-route
 ----
 
-In addition to creating the sink binding the Camel JBang plugin also takes 
care of configuring the Knative Camel component.
+In addition to creating the sink binding, the Camel JBang plugin also takes 
care of configuring the Knative Camel component.
 The Knative component uses a configuration file that you can find in 
`src/main/resources/knative.json`.
 As you can see the configuration uses the `K_SINK` injected property 
placeholder as a broker URL.
 
@@ -838,8 +838,8 @@ The mount trait provides the following configuration 
options:
 | mount.configs
 | []string
 | A list of configuration pointing to configmap/secret.
-The configuration are expected to be UTF-8 resources as they are processed by 
runtime Camel Context and tried to be parsed as property files.
-They are also made available on the classpath in order to ease their usage 
directly from the Route.
+The configurations are expected to be UTF-8 resources, as they are processed by the runtime Camel Context and parsed as property files.
+They are also made available on the classpath to ease their usage directly 
from the Route.
 Syntax: [configmap\|secret]:name[/key], where name represents the resource 
name and key optionally represents the resource key to be filtered
 
 | mount.resources
@@ -905,14 +905,14 @@ spec:
           persistentVolumeClaim:
             claimName: my-pvc
 ----
-<1> The config map my-data mounted into the container with default mount path 
for configurations
+<1> The config map `my-data` mounted into the container with default mount 
path for configurations
 <2> The volume mounted into the container with given path
 <3> The config map reference as volume spec
-<4> The persistent volume claim my-pvc
+<4> The persistent volume claim `my-pvc`
 
 === ConfigMaps, volumes and secrets
 
-In the previous section we have seen how to mount volumes, configs, resources 
into the container.
+In the previous section, we have seen how to mount volumes, configs, and 
resources into the container.
 
 The Kubernetes export command provides some shortcut options for adding 
configmaps and secrets as volume mounts.
 The syntax is as follows:
@@ -932,7 +932,7 @@ The options expect the following syntax:
 | Add a runtime configuration from a ConfigMap or a Secret (syntax: 
[configmap\|secret]:name[/key], where name represents the configmap or secret 
name and key optionally represents the configmap or secret key to be filtered).
 
 | resource
-| Add a runtime resource from a Configmap or a Secret (syntax: 
[configmap\|secret]:name[/key][@path], where name represents the configmap or 
secret name, key optionally represents the configmap or secret key to be 
filtered and path represents the destination path).
+| Add a runtime resource from a Configmap or a Secret (syntax: 
[configmap\|secret]:name[/key][@path], where name represents the configmap or 
secret name, key optionally represents the configmap or secret key to be 
filtered and the path represents the destination path).
 
 | volume
 | Mount a volume into the integration container, for instance "--volume 
pvcname:/container/path".
@@ -993,11 +993,11 @@ spec:
 <5> The config map resource volume
 <6> The persistent volume claim volume
 
-The trait volume mounts follow some best practices in specifying the mount 
paths in the container. Configurations and resources, secrets and configmaps do 
use different paths in the container. The Camel application is automatically 
configured to read these paths as resource folders, so you can use the mounted 
data in the Camel routes via classpath reference for instance.
+The trait volume mounts follow some best practices in specifying the mount paths in the container. Configurations and resources, secrets, and configmaps use different paths in the container. The Camel application is automatically configured to read these paths as resource folders, so you can use the mounted data in the Camel routes via classpath reference, for instance.
 
 === Ingress trait options
 
-The ingress trait enhance the Kubernetes manifest with an Ingress resource to 
expose the application to the outside world. This requires the presence in the 
Kubernetes manifest of a Service Resource.
+The ingress trait enhances the Kubernetes manifest with an Ingress resource to 
expose the application to the outside world. This requires the presence in the 
Kubernetes manifest of a Service Resource.
 
 The ingress trait provides the following configuration options:
 
@@ -1011,7 +1011,7 @@ The ingress trait provides the following configuration 
options:
 
 | ingress.annotations
 | map[string]string
-| The annotations added to the ingress. This can be used to set controller 
specific annotations, e.g., when using the 
https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md[NGINX
 Ingress controller].
+| The annotations added to the ingress. This can be used to set 
controller-specific annotations, e.g., when using the 
https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md[NGINX
 Ingress controller].
 
 | ingress.host
 | string
@@ -1082,7 +1082,7 @@ spec:
 
 === Route trait options
 
-The Route trait enhance the Kubernetes manifest with a Route resource to 
expose the application to the outside world. This requires the presence in the 
Kubernetes manifest of a Service Resource.
+The Route trait enhances the Kubernetes manifest with a Route resource to 
expose the application to the outside world. This requires the presence in the 
Kubernetes manifest of a Service Resource.
 
 NOTE: You need to enable the OpenShift profile trait with 
`--trait-profile=openshift` option.
 
@@ -1122,7 +1122,7 @@ The Route trait provides the following configuration 
options:
 
 | route.tls-destination-ca-certificate
 | string
-| The destination CA contents or file (`file:absolute.path`). The destination 
CA certificate provides the contents of the CA certificate of the final 
destination. When using reencrypt termination this file should be provided in 
order to have routers use it for health checks on the secure connection. If 
this field is not specified, the router may provide its own destination CA and 
perform hostname validation using the short service name 
(service.namespace.svc), which allows infrastructure [...]
+| The destination CA contents or file (`file:absolute.path`). The destination 
CA certificate provides the contents of the CA certificate of the final 
destination. When using reencrypt termination, this file should be provided to 
have routers use it for health checks on the secure connection. If this field 
is not specified, the router may provide its own destination CA and perform 
hostname validation using the short service name (service.namespace.svc), which 
allows infrastructure generat [...]
 
 
 | route.tls-insecure-edge-termination-policy
@@ -1251,17 +1251,17 @@ spec:
 By default, the Kubernetes manifest is suited for plain Kubernetes platforms.
 In case you are targeting OpenShift as a platform you may want to leverage 
special resources such as Route, ImageStream or BuildConfig.
 
-You can set the `cluster-type=openshift` option on the export command in order 
to tell the Kubernetes plugin to create a Kubernetes manifest specifically 
suited for OpenShift.
+You can set the `cluster-type=openshift` option on the export command to tell 
the Kubernetes plugin to create a Kubernetes manifest specifically suited for 
OpenShift.
 
 Also, the default image builder is S2I for OpenShift clusters.
-This means by setting the cluster type you will automatically switch from 
default Jib to S2I.
+This means that by setting the cluster type, you will automatically switch from the default Jib to S2I.
 Of course, you can tell the plugin to use Jib with `--image-builder=jib` 
option.
 The image may then get pushed to an external registry (docker.io or quay.io) 
so OpenShift can pull as part of the deployment in the cluster.
 
-TIP: When using S2I you may need to explicitly set the `--image-group` option 
to the project/namespace name in the OpenShift cluster.
+TIP: When using S2I, you may need to explicitly set the `--image-group` option 
to the project/namespace name in the OpenShift cluster.
 This is because S2I will push the container image to an image repository that 
uses the OpenShift project/namespace name as part of the image coordinates in 
the registry: `image-registry.openshift-image-registry.svc:5000/<project 
name>/<name>:<tag>`
 
-When using S2I as an image build option the Kubernetes manifest also contains 
an ImageStream and BuildConfig resource.
+When using S2I as an image build option, the Kubernetes manifest also contains 
an ImageStream and BuildConfig resource.
 Both resources are automatically added/removed when creating/deleting the 
deployment with the Camel Kubernetes JBang plugin.
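Putting the options from this section together, an OpenShift-targeted export might look like the following (a sketch; `my-project` is a placeholder for the OpenShift project/namespace name):

[source,bash]
----
camel kubernetes export route.yaml --cluster-type=openshift --image-group=my-project
----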
 
 == Kubernetes run
@@ -1274,7 +1274,7 @@ The command performs a project export to a temporary 
folder, builds the project
 camel kubernetes run route.yaml --image-registry=kind
 ----
 
-When connecting to a local Kubernetes cluster you may need to specify the 
image registry where the application container image gets pushed to.
+When connecting to a local Kubernetes cluster, you may need to specify the 
image registry where the application container image gets pushed to.
 The run command is able to automatically configure the local registry when 
using predefined names such as `kind` or `minikube`.
 
 Use the `--image-group` or the `--image` option to customize the container 
image.
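For example (a sketch; the image coordinates are placeholders):

[source,bash]
----
camel kubernetes run route.yaml --image-registry=kind --image-group=my-org
# or force the full image coordinates
camel kubernetes run route.yaml --image=quay.io/my-org/my-app:1.0
----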
@@ -1298,10 +1298,10 @@ The `--image` option forces the container image group, 
name, version as well as
 The `run` command provides the same options to customize the Kubernetes 
manifest as the `export` command.
 You may want to add environment variables, mount secrets and configmaps, 
adjust the exposed service and many other things with trait options as 
described in the export command section.
 
-=== Auto reload with --dev option
+=== Auto reload with `--dev` option
 
 The `--dev` option runs the application on Kubernetes and automatically adds a 
file watcher to listen for changes on the Camel route source files.
-In case the sources get changed the process will automatically perform a 
rebuild and redeployment.
+In case the sources get changed, the process will automatically perform a 
rebuild and redeployment.
 The command constantly prints the logs to the output, so you may see the 
changes directly being applied to the Kubernetes deployment.
 
 [source,bash]
@@ -1312,8 +1312,8 @@ camel kubernetes run route.yaml --image-registry=kind 
--dev
 You need to terminate the process to stop the dev mode.
 This automatically removes the Kubernetes deployment from the cluster on 
shutdown.
 
-NOTE: On MacOS hosts the file watch mechanism is known to be much slower and 
less stable compared to using the `--dev` option on other operating systems 
like Linux.
-This is due to limited native file operations on MacOS for Java processes.
+NOTE: On macOS hosts, the file watch mechanism is known to be much slower and 
less stable compared to using the `--dev` option on other operating systems 
like Linux.
+This is due to limited native file operations on macOS for Java processes.
 
 == Show logs
 
@@ -1346,7 +1346,7 @@ The delete operation will remove all resources defined in 
the Kubernetes manifes
 To run a local Kubernetes cluster with Minikube for development purposes.
 Here are some tips from users that have been using this.
 
-The following steps has been known to be working (Camel 4.9):
+The following steps are known to work (Camel 4.9):
 
 1. `minikube start --addons registry --driver=docker`
 2. `eval $(minikube -p minikube docker-env)`
diff --git a/docs/user-manual/modules/ROOT/pages/security.adoc 
b/docs/user-manual/modules/ROOT/pages/security.adoc
index e50b3f1b668..febc4f96bce 100644
--- a/docs/user-manual/modules/ROOT/pages/security.adoc
+++ b/docs/user-manual/modules/ROOT/pages/security.adoc
@@ -1,7 +1,7 @@
 = Security
 
-Camel offers several forms & levels of security capabilities that can be
-utilized on Camel routes. These various forms of security may be used in
+Camel offers several forms and levels of security capabilities that can be
+used on Camel routes. These various forms of security may be used in
 conjunction with each other or separately.
 
 The broad categories offered are:
@@ -60,11 +60,11 @@ Camel offers the 
xref:components::properties-component.adoc[Properties] componen
 externalize configuration values to properties files. Those values could
 contain sensitive information such as usernames and passwords.
 
-Those values can be encrypted and automatic decrypted by Camel using:
+Those values can be encrypted and automatically decrypted by Camel using:
 
 * xref:components:others:jasypt.adoc[Jasypt]
 
-Camel also support accessing the secured configuration from an external vault 
systems.
+Camel also supports accessing the secured configuration from external vault systems.
 
 === Configuration Security using Vaults
 
@@ -77,7 +77,7 @@ The following _Vaults_ are supported by Camel:
 
 ==== Using AWS Vault
 
-To use AWS Secrets Manager you need to provide _accessKey_, _secretKey_ and 
the _region_.
+To use AWS Secrets Manager, you need to provide _accessKey_, _secretKey_ and 
the _region_.
 This can be done using environmental variables before starting the application:
 
 [source,bash]
@@ -156,9 +156,9 @@ You could specify a default value in case the secret is not 
present on AWS Secre
 </camelContext>
 ----
 
-In this case if the secret doesn't exist, the property will fallback to 
"default" as value.
+In this case, if the secret doesn't exist, the property will fall back to "default" as the value.
 
-Also, you are able to get particular field of the secret, if you have for 
example a secret named database of this form:
+Also, you are able to get a particular field of the secret, if you have, for 
example, a secret named database of this form:
 
 [source,json]
 ----
@@ -198,15 +198,15 @@ You could specify a default value in case the particular 
field of secret is not
 </camelContext>
 ----
 
-In this case if the secret doesn't exist or the secret exists, but the 
username field is not part of the secret, the property will fallback to "admin" 
as value.
+In this case, if the secret doesn't exist, or it exists but the username field is not part of the secret, the property will fall back to "admin" as the value.
 
-NOTE: For the moment we are not considering the rotation function, if any will 
be applied, but it is in the work to be done.
+NOTE: For the moment, we are not considering the rotation function, if any is applied; support for this is planned.
 
 The only requirement is adding `camel-aws-secrets-manager` JAR to your Camel 
application.
 
 ==== Using GCP Vault
 
-To use GCP Secret Manager you need to provide _serviceAccountKey_ file and GCP 
_projectId_.
+To use GCP Secret Manager, you need to provide the _serviceAccountKey_ file and the GCP _projectId_.
 This can be done using environmental variables before starting the application:
 
 [source,bash]
@@ -265,9 +265,9 @@ You could specify a default value in case the secret is not 
present on GCP Secre
 </camelContext>
 ----
 
-In this case if the secret doesn't exist, the property will fallback to 
"default" as value.
+In this case, if the secret doesn't exist, the property will fall back to "default" as the value.
 
-Also, you are able to get particular field of the secret, if you have for 
example a secret named database of this form:
+Also, you are able to get a particular field of the secret, if you have, for 
example, a secret named database of this form:
 
 [source,json]
 ----
@@ -307,9 +307,9 @@ You could specify a default value in case the particular 
field of secret is not
 </camelContext>
 ----
 
-In this case if the secret doesn't exist or the secret exists, but the 
username field is not part of the secret, the property will fallback to "admin" 
as value.
+In this case, if the secret doesn't exist, or it exists but the username field is not part of the secret, the property will fall back to "admin" as the value.
 
-NOTE: For the moment we are not considering the rotation function, if any will 
be applied, but it is in the work to be done.
+NOTE: For the moment, we are not considering the rotation function, if any is applied; support for this is planned.
 
 There are only two requirements: 
 - Adding `camel-google-secret-manager` JAR to your Camel application.
@@ -317,7 +317,7 @@ There are only two requirements:
 
 ==== Using Azure Key Vault
 
-To use this function you'll need to provide credentials to Azure Key Vault 
Service as environment variables:
+To use this function, you'll need to provide credentials to Azure Key Vault 
Service as environment variables:
 
 [source,bash]
 ----
@@ -353,7 +353,7 @@ camel.vault.azure.azureIdentityEnabled = true
 camel.vault.azure.vaultName = vaultName
 ----
 
-At this point you'll be able to reference a property in the following way:
+At this point, you'll be able to reference a property in the following way:
 
 [source,xml]
 ----
@@ -379,9 +379,9 @@ You could specify a default value in case the secret is not 
present on Azure Key
 </camelContext>
 ----
 
-In this case if the secret doesn't exist, the property will fallback to 
"default" as value.
+In this case, if the secret doesn't exist, the property will fall back to "default" as the value.
 
-Also you are able to get particular field of the secret, if you have for 
example a secret named database of this form:
+Also, you are able to get a particular field of the secret if you have, for example, a secret named database of this form:
 
 [source,bash]
 ----
@@ -421,9 +421,9 @@ You could specify a default value in case the particular 
field of secret is not
 </camelContext>
 ----
 
-In this case if the secret doesn't exist or the secret exists, but the 
username field is not part of the secret, the property will fallback to "admin" 
as value.
+In this case, if the secret doesn't exist, or it exists but the username field is not part of the secret, the property will fall back to "admin" as the value.
 
-For the moment we are not considering the rotation function, if any will be 
applied, but it is in the work to be done.
+For the moment, we are not considering the rotation function, if any is applied; support for this is planned.
 
 The only requirement is adding the camel-azure-key-vault jar to your Camel 
application.
 
@@ -491,7 +491,7 @@ Also, you are able to get a particular field of the secret, 
if you have, for exa
 }
 ----
 
-You're able to do get single secret value in your route, in the 'secret' 
engine, like for example:
+You're able to get a single secret value in your route, in the 'secret' engine, for example:
 
 [source,xml]
 ----
@@ -579,7 +579,7 @@ camel.vault.aws.region = region
 
 Or by specifying accessKey/SecretKey and region, instead of using the default 
credentials provider chain.
 
-To enable the automatic refresh you'll need additional properties to set:
+To enable the automatic refresh, you'll need to set additional properties:
 
 [source,properties]
 ----
@@ -601,9 +601,9 @@ Another option is to use AWS EventBridge in conjunction 
with the AWS SQS service
 
 On the AWS side, the following resources need to be created:
 
-- an AWS Couldtrail trail
+- an AWS CloudTrail trail
 - an AWS SQS Queue
-- an Eventbridge rule of the following kind
+- an EventBridge rule of the following kind
 
 [source,json]
 ----
@@ -638,7 +638,7 @@ aws sqs set-queue-attributes --queue-url <queue_url> 
--attributes file://policy.
 
 where queue_url is the AWS SQS Queue URL of the just created Queue.
 
-Now you should be able to set up the configuration on the Camel side. To 
enable the SQS notification add the following properties:
+Now you should be able to set up the configuration on the Camel side. To 
enable the SQS notification, add the following properties:
 
 [source,properties]
 ----
@@ -694,12 +694,12 @@ Note that `camel.vault.gcp.secrets` is not mandatory: if 
not specified the task
 
 The `camel.vault.gcp.subscriptionName` is the subscription name created in 
relation to the Google PubSub topic associated with the tracked secrets.
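A property sketch for this setup could look like the following (the refresh-related property names are assumptions mirroring the AWS and Azure examples above; the values are placeholders):

[source,properties]
----
camel.vault.gcp.projectId = projectId
camel.vault.gcp.refreshEnabled = true
camel.vault.gcp.refreshPeriod = 60000
camel.vault.gcp.secrets = mySecret
camel.vault.gcp.subscriptionName = mySubscription
----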
 
-This mechanism while make use of the notification system related to Google 
Secret Manager: through this feature, every secret could be associated to one 
up to ten Google Pubsub Topics. These topics will receive 
-events related to life cycle of the secret.
+This mechanism makes use of the notification system related to Google Secret Manager: through this feature, every secret can be associated with one to ten Google PubSub topics. These topics will receive
+events related to the life cycle of the secret.
 
 There are only two requirements: 
 - Adding `camel-google-secret-manager` JAR to your Camel application.
-- Give the service account used permissions to do operation at secret 
management level (for example accessing the secret payload, or being admin of 
secret manager service and also have permission over the Pubsub service)
+- Give the service account used permissions to perform operations at the secret management level (for example, accessing the secret payload, or being an admin of the Secret Manager service, and also having permission over the PubSub service)
 
 ==== Automatic Camel context reloading on Secret Refresh while using Azure Key 
Vault
 
@@ -741,7 +741,7 @@ camel.vault.azure.azureIdentityEnabled = true
 camel.vault.azure.vaultName = vaultName
 ----
 
-To enable the automatic refresh you'll need additional properties to set:
+To enable the automatic refresh, you'll need to set additional properties:
 
 [source,properties]
 ----

