This is an automated email from the ASF dual-hosted git repository.

davsclaus pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/main by this push:
     new 3793981daa3 CAMEL-20410: documentation fixes for camel-azure-eventhubs 
(#13106)
3793981daa3 is described below

commit 3793981daa32567a25a2aa1628a986658f863171
Author: Otavio Rodolfo Piske <orpi...@users.noreply.github.com>
AuthorDate: Tue Feb 13 19:59:13 2024 +0100

    CAMEL-20410: documentation fixes for camel-azure-eventhubs (#13106)
    
    * CAMEL-20410: documentation fixes for camel-azure-eventhubs
    
    - Fixed grammar and typos
    - Fixed punctuation
    - Added and/or fixed links
    
    * CAMEL-20410: documentation fixes for camel-azure-files
    
    - Fixed grammar and typos
    - Fixed punctuation
    - Added and/or fixed links
---
 .../src/main/docs/azure-eventhubs-component.adoc   | 35 +++++++------
 .../src/main/docs/azure-files-component.adoc       | 58 +++++++++++-----------
 2 files changed, 49 insertions(+), 44 deletions(-)

diff --git a/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc b/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc
index 4922aa632c7..921ad086fb6 100644
--- a/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc
+++ b/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc
@@ -18,7 +18,7 @@
 The Azure Event Hubs component is used to integrate with 
https://azure.microsoft.com/en-us/services/event-hubs/[Azure Event Hubs] using 
the https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol[AMQP protocol].
 Azure EventHubs is a highly scalable publish-subscribe service that can ingest 
millions of events per second and stream them to multiple consumers.
 
-NOTE: Besides AMQP protocol support, Event Hubs as well supports Kafka and 
HTTPS protocols. Therefore, you can use as well 
xref:components::kafka-component.adoc[Camel Kafka] component to produce and 
consume to Azure Event Hubs. You can lean more 
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs[here].
+NOTE: Besides the AMQP protocol, Event Hubs also supports the Kafka and 
HTTPS protocols. Therefore, you can also use the 
xref:components::kafka-component.adoc[Camel Kafka] component to produce to and 
consume from Azure Event Hubs. You can learn more 
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs[here].
 
 
 Prerequisites
@@ -67,37 +67,41 @@ include::partial$component-endpoint-options.adoc[]
 == Authentication Information
 
 You have three different Credential Types: AZURE_IDENTITY, TOKEN_CREDENTIAL 
and CONNECTION_STRING. You can also provide a client instance yourself.
-To use this component, you have 3 options in order to provide the required 
Azure authentication information:
+To use this component, you have three options to provide the required Azure 
authentication information:
+
+*CONNECTION_STRING*:
 
-CONNECTION_STRING:
 - Provide `sharedAccessName` and `sharedAccessKey` for your Azure Event Hubs 
account. The sharedAccessKey can
 be generated through your Event Hubs Azure portal.
 - Provide the `connectionString`. If you provide the connection string, you 
don't need to supply `namespace`, `eventHubName`, `sharedAccessKey` and 
`sharedAccessName`,
as this data is already included in the `connectionString`; therefore, it is the 
simplest option to get started. Learn more 
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string[here]
 on how to generate the connection string.
 
-TOKEN_CREDENTIAL:
-- Provide an implementation of `com.azure.core.credential.TokenCredential` 
into the Camel's Registry, e.g. using the 
`com.azure.identity.DefaultAzureCredentialBuilder().build();` API.
+*TOKEN_CREDENTIAL*:
+
+- Provide an implementation of `com.azure.core.credential.TokenCredential` 
in the Camel registry, e.g., one built with the 
`com.azure.identity.DefaultAzureCredentialBuilder().build()` API (see the sketch after this list).
 See the documentation 
https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication[here
 about Azure-AD authentication].
 
 *AZURE_IDENTITY*:
 
 - This will use a `com.azure.identity.DefaultAzureCredentialBuilder().build()` 
instance. This will follow the Default Azure Credential chain.
 See the documentation 
https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication[here
 about Azure-AD authentication].
 
-Client instance:
+*Client instance*:
+
 - Provide an 
https://docs.microsoft.com/en-us/java/api/com.azure.messaging.eventhubs.eventhubproducerasyncclient?view=azure-java-stable[EventHubProducerAsyncClient]
 instance, which can be
supplied via the `producerAsyncClient` option. However, this is *only possible for the Camel 
producer*; for the Camel consumer, it is not possible to inject the client due to 
a design constraint of the `EventProcessorClient`.
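
A minimal sketch of the TOKEN_CREDENTIAL option (the bean name and route are illustrative, and the exact `credentialType`/`tokenCredential` endpoint parameters should be checked against the endpoint options above):

[source,java]
----
import com.azure.core.credential.TokenCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import org.apache.camel.builder.RouteBuilder;

public class EventHubsTokenCredentialRoute extends RouteBuilder {
    @Override
    public void configure() {
        // build the credential with the Azure Identity SDK and bind it into the Camel registry
        TokenCredential credential = new DefaultAzureCredentialBuilder().build();
        getContext().getRegistry().bind("myTokenCredential", credential);

        from("direct:start")
            // reference the registry bean with the '#' notation (namespace/hub names are placeholders)
            .to("azure-eventhubs:myNamespace/myEventHub"
                + "?credentialType=TOKEN_CREDENTIAL&tokenCredential=#myTokenCredential");
    }
}
----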
 
 == Checkpoint Store Information
-A checkpoint store stores and retrieves partition ownership information and 
checkpoint details for each partition in a given consumer group of an event hub 
instance. Users are not meant to implement an CheckpointStore.
+
+A checkpoint store stores and retrieves partition ownership information and 
checkpoint details for each partition in a given consumer group of an event hub 
instance. Users are not meant to implement a CheckpointStore.
 Users are expected to choose an existing implementation of this interface, 
instantiate it, and pass it to the component through the `checkpointStore` option.
 Users are not expected to use any of the methods on a checkpoint store; these 
are used internally by the client.
 
-Having said that, if the user does not pass any `CheckpointStore` 
implementation, the component will fallback to use 
https://docs.microsoft.com/en-us/javascript/api/@azure/eventhubs-checkpointstore-blob/blobcheckpointstore?view=azure-node-latest[`BlobCheckpointStore`]
 to store the checkpoint info in Azure Blob Storage account.
+Having said that, if the user does not pass any `CheckpointStore` 
implementation, the component will fall back to using 
https://docs.microsoft.com/en-us/javascript/api/@azure/eventhubs-checkpointstore-blob/blobcheckpointstore?view=azure-node-latest[`BlobCheckpointStore`]
 to store the checkpoint info in an Azure Blob Storage account.
 If you choose to use the default `BlobCheckpointStore`, you will need to supply 
the following options (see the sketch after this list):
 
 - `blobAccountName`: It sets the Azure account name to be used for authentication 
with Azure Blob services.
-- `blobAccessKey` : It sets access key for the associated azure account name 
to be used for authentication with azure blob services.
-- `blobContainerName` : It sets the blob container that shall be used by the 
BlobCheckpointStore to store the checkpoint offsets.
+- `blobAccessKey`: It sets the access key for the associated Azure account 
name to be used for authentication with Azure Blob services.
+- `blobContainerName`: It sets the blob container that shall be used by the 
`BlobCheckpointStore` to store the checkpoint offsets.
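
A hedged sketch of a consumer endpoint relying on the default `BlobCheckpointStore` (the namespace, event hub, account, container, and key placeholders are illustrative only):

[source,java]
----
from("azure-eventhubs:myNamespace/myEventHub"
        + "?sharedAccessName={{sharedAccessName}}&sharedAccessKey=RAW({{sharedAccessKey}})"
        + "&blobAccountName=mycheckpointaccount"
        + "&blobAccessKey=RAW({{blobAccessKey}})"
        + "&blobContainerName=checkpoints")
    .to("log:eventhubs");
----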
 
 
 == Async Consumer and Producer
@@ -109,7 +113,7 @@ This allows camel route to consume and produce events 
asynchronously without blo
 
 == Usage
 
-For example in order consume event from EventHub, use the following snippet:
+For example, to consume events from an Event Hub, use the following snippet:
 
 [source,java]
 
--------------------------------------------------------------------------------
@@ -129,11 +133,12 @@ The same goes as well for the component's consumer, it 
will set the encoded data
 === Automatic detection of EventHubProducerAsyncClient client in registry
 
 The component is capable of detecting the presence of an 
EventHubProducerAsyncClient bean in the registry.
-If it's the only instance of that type it will be used as client and you won't 
have to define it as uri parameter, like the example above.
+If it's the only instance of that type, it will be used as the client, and you 
won't have to define it as a URI parameter, as in the example above.
 This can be really useful for smarter configuration of the endpoint.
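
A minimal sketch of relying on this auto-detection (the connection string placeholder and bean name are illustrative; `camelContext` stands for your running `CamelContext`):

[source,java]
----
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerAsyncClient;

// build the client once and bind it into the registry;
// as the only bean of this type, it is picked up by the producer endpoint
EventHubProducerAsyncClient client = new EventHubClientBuilder()
        .connectionString("{{connectionString}}", "myEventHub")
        .buildAsyncProducerClient();
camelContext.getRegistry().bind("eventHubClient", client);
----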
 
 === Consumer Example
-The example below will unmarshal the events that was originally produced in 
JSON:
+
+The example below will unmarshal the events that were originally produced in 
JSON:
 
 [source,java]
 ----
@@ -156,7 +161,7 @@ from("direct:start")
 .to("azure-eventhubs:?connectionString=RAW({{connectionString}})"
 ----
 
-Also, the component supports as well *aggregation* of messages by sending 
events as *iterable* of either Exchanges/Messages or normal data (e.g: list of 
Strings). For example:
+The component also supports *aggregation* of messages by sending 
events as an *iterable* of either Exchanges/Messages or normal data (e.g., a list of 
Strings). For example:
 
 [source,java]
 ----
@@ -189,7 +194,7 @@ from("direct:start")
 
 === Development Notes (Important)
 
-When developing on this component, you will need to obtain your Azure 
accessKey in order to run the integration tests. In addition to the mocked unit 
tests
+When developing on this component, you will need to obtain your Azure 
accessKey to run the integration tests. In addition to the mocked unit tests,
 you *will need to run the integration tests with every change you make or even 
a client upgrade, as the Azure client can break things even on minor version 
upgrades.*
 To run the integration tests, in this component's directory, run the following 
Maven command:
 
diff --git a/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc b/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
index 0b26a1a30c1..837b8a0475a 100644
--- a/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
+++ b/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
@@ -18,13 +18,13 @@ This component provides access to Azure Files.
 
 [CAUTION]
 ====
-A preview component so anything can change in a next release
-or it could be even dropped. At the same time it is consolidated
+This is a preview component; therefore, anything can change in future releases
+(features and behavior can be changed, modified, or even dropped without 
notice). At the same time, it is consolidated
 enough, sparingly documented, a few users reported it was working
 in their environment, and it is ready for wider feedback.
 ====
 
-When consuming from remote files server make sure you read the section titled 
_Consuming Files_
+When consuming from a remote files server, make sure you read the section titled 
_Consuming Files_
 further below for details related to consuming files.
 
 Maven users will need to add the following dependency to their `pom.xml`
@@ -51,10 +51,10 @@ is a relative path and does not include the share name. The 
relative path
 can contain nested folders, such as `inbox/spam`. It defaults to
 the share root directory.
 
-The `autoCreate` option is supported for the directory,
+The `autoCreate` option is supported for the directory;
 when the consumer or producer starts, there's an additional operation
 performed to create the directory configured for the endpoint. The default
-value for `autoCreate` is `true`. On contrary, the share must exist, it
+value for `autoCreate` is `true`. In contrast, the share must exist; it
 is not automatically created.
 
 If no *port* number is provided, Camel will provide default values
@@ -63,7 +63,7 @@ according to the protocol (https 443).
 You can append query options to the URI in the following format
 `?option=value&option2=value&...`.
 
-To use this component, you have multiple options in order to provide the 
required Azure authentication information:
+To use this component, you have multiple options to provide the required Azure 
authentication information:
 
 - Via Azure Identity, when specifying `credentialType=AZURE_IDENTITY` and 
providing required 
https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/identity/azure-identity#environment-variables[environment
 variables]. This enables service principal (e.g. app registration) 
authentication with a secret/certificate as well as username and password.
 - Via shared storage account key, when specifying 
`credentialType=SHARED_ACCOUNT_KEY` and providing `sharedKey` for your Azure 
account; this is the simplest way to get started. The sharedKey can be 
generated through your Azure portal.
@@ -99,7 +99,7 @@ 
azure-files://camelazurefiles/samples/inbox/spam?sharedKey=FAKE502UyuBD...3Z%2BA
 == Paths
 
 The path separator is `/`. The absolute paths start with the path separator.
-The absolute paths do not include the share name and they are relative
+The absolute paths do not include the share name, and they are relative
 to the share root rather than to the endpoint starting directory.
  
 *NOTE:* In some places, namely in the logs of the used libraries, an OS-specific path 
separator
@@ -124,7 +124,7 @@ This component uses the Azure Java SDK libraries for the 
actual work.
 The remote consumer will by default leave the consumed
 files untouched on the remote cloud files server. You have to configure it
 explicitly if you want it to delete the files or move them to another
-location. For example you can use `delete=true` to delete the files, or
+location. For example, you can use `delete=true` to delete the files, or
 use `move=.done` to move the files into the `.done` subdirectory.
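
A hedged sketch of the delete scenario (reusing the placeholder account and share from the URI example above):

[source,java]
----
from("azure-files://camelazurefiles/samples/inbox?sharedKey=RAW({{sharedKey}})&delete=true")
    .to("file:data/inbox");
----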
 
 In Camel, the `.`-prefixed folders are excluded from
@@ -150,20 +150,20 @@ performance targets, route processors, caching, resuming, 
etc.
 === Limitations
 
 The option *readLock* can be used to force Camel *not* to consume files
-that is currently in the progress of being written. However, this option
+that are currently in the process of being written. However, this option
 is turned off by default, as it requires that the user has write access.
-See the options table at File2 for more details about
+See the endpoint options table for more details about
 read locks. +
  There are other solutions to avoid consuming files that are currently
 being written; for instance, you can write to a temporary
 destination and move the file after it has been written.
 
-For the `readLock=changed`, it relies only on the last modified,
+For `readLock=changed`, the check relies only on the last modified timestamp;
 furthermore, a precision finer than 5 seconds might be problematic.
 
 When moving files using the `move` or `preMove` option, the files are
 restricted to the share. That prevents the consumer from moving files
-outside of the endpoint share.
+outside the endpoint share.
 
 === Exchange Properties
 
@@ -173,11 +173,11 @@ The consumer sets the following exchange properties
 |=======================================================================
 |Header |Description
 
-|`CamelBatchIndex` |Current index out of total number of files being consumed 
in this batch.
+|`CamelBatchIndex` |The current index out of the total number of files being 
consumed in this batch.
 
-|`CamelBatchSize` |Total number of files being consumed in this batch.
+|`CamelBatchSize` |The total number of files being consumed in this batch.
 
-|`CamelBatchComplete` |True if there are no more files in this batch.
+|`CamelBatchComplete` |True if there are no more files in this batch.
 |=======================================================================
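
As a hedged sketch (endpoint placeholders as in the earlier examples), these properties can be read with the simple language, e.g. to log batch progress:

[source,java]
----
from("azure-files://camelazurefiles/samples/inbox?sharedKey=RAW({{sharedKey}})")
    .log("file ${exchangeProperty.CamelBatchIndex} of ${exchangeProperty.CamelBatchSize},"
        + " batch complete: ${exchangeProperty.CamelBatchComplete}")
    .to("file:data/inbox");
----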
 
 == Producing Files
@@ -187,19 +187,19 @@ The Files producer is optimized for two body types:
   - `java.io.InputStream` if `CamelFileLength` header is set
   - `byte[]`
 
-In either case the remote file size is allocated
+In either case, the remote file size is allocated
 and then rewritten with the body content. Any inconsistency between
 the declared file length and the stream length results in a corrupted
 remote file.
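
A hedged sketch of a producer route that declares the length before sending an `InputStream` body (the file path, endpoint values, and `fileName` parameter are illustrative assumptions):

[source,java]
----
from("direct:upload")
    .process(exchange -> {
        java.io.File file = new java.io.File("data/report.txt");
        // declare the exact remote size up front; it must match the stream length
        exchange.getMessage().setHeader("CamelFileLength", file.length());
        exchange.getMessage().setBody(new java.io.FileInputStream(file));
    })
    .to("azure-files://camelazurefiles/samples/outbox"
        + "?sharedKey=RAW({{sharedKey}})&fileName=report.txt");
----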
 
 === Limitations
 
-The underlying Azure Files service does not allow to grow files. The file
+The underlying Azure Files service does not allow growing files. The file
 length must be known at creation time; consequently:
 
   - `CamelFileLength` header has an important
     meaning even for producers.
-  - No append mode is supported.
+  - Append mode is not supported.
 
 
 == About Timeouts
@@ -212,7 +212,7 @@ The `timeout` option only applies as the data timeout in 
millis.
 The meta-data operations timeout is the minimum of:
 `readLockCheckInterval`, `timeout` and 20_000 millis.
 
-For now the files upload has no timeout. During the upload,
+For now, the file upload has no timeout. During the upload,
 the underlying library could log timeout warnings. They are
 recoverable, and the upload can continue.
 
@@ -224,13 +224,13 @@ entire remote file content into memory as it is streamed 
directly into
 the local file using `FileOutputStream`.
 
 Camel will store to a local file with the same name as the remote file,
-though with `.inprogress` as extension while the file is being
-downloaded. Afterwards, the file is renamed to remove the `.inprogress`
-suffix. And finally, when the Exchange is complete
+though with `.inprogress` as an extension while the file is being
+downloaded. Afterward, the file is renamed to remove the `.inprogress`
+suffix. And finally, when the Exchange is complete,
 the local file is deleted.
 
 So if you want to download files from a remote files server and store them
-as local files then you need to route to a file endpoint such as:
+as local files, then you need to route to a file endpoint such as:
 
 [source,java]
 ----
@@ -252,8 +252,8 @@ the build in 
`org.apache.camel.component.file.GenericFileFilter` in
 Java. You can then configure the endpoint with such a filter to skip
 certain files before they are processed.
 
-In the sample we have built our own filter that only accepts files
-starting with report in the filename.
+In the sample, we have built our own filter that only accepts files
+whose filename starts with `report`.
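
A minimal sketch of such a filter (the class name and the directory handling are illustrative):

[source,java]
----
import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

public class ReportFileFilter<T> implements GenericFileFilter<T> {
    @Override
    public boolean accept(GenericFile<T> file) {
        // let directories pass so the consumer can descend into them
        if (file.isDirectory()) {
            return true;
        }
        // only accept files whose name starts with "report"
        return file.getFileName().startsWith("report");
    }
}
----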
 
 And then we can configure our route using the *filter* attribute to
 reference our filter (using `#` notation) that we have defined in the
@@ -264,12 +264,12 @@ The accept(file) file argument has properties:
   - endpoint path: the share name such as `/samples`
   - relative path: a path to the file such as `subdir/a file`
   - directory: `true` if a directory
-  - file length: if not a directory then a length of the file in bytes
+  - file length: if not a directory, the length of the file in bytes
 
 
 == Filtering using ANT path matcher
 
-The ANT path matcher is a filter that is shipped out-of-the-box in the
+The ANT path matcher is a filter shipped out-of-the-box in the
 *camel-spring* jar. So you need to depend on *camel-spring* if you are
 using Maven. +
  The reason is that we leverage Spring's
@@ -297,14 +297,14 @@ documentation.
 
 == Consuming a single file using a fixed name
 
-Unlike FTP component that features special combination of options:
+Unlike the FTP component, which features a special combination of options:
   
   - `useList=false`
   - `fileName=myFileName.txt`
   - `ignoreFileNotFoundOrPermissionError=true`
 
 to optimize _the single file using a fixed name_ use case,
-it is necessary to fallback to regular filters (i.e. the list
+it is necessary to fall back to regular filters (i.e. the list
 permission is needed). 
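
A hedged sketch of such a fallback (assuming the generic file `include` option is available on this endpoint; verify against the endpoint options above):

[source,java]
----
// 'include' is a regular expression, so the dot is escaped in the Java string
from("azure-files://camelazurefiles/samples/inbox"
        + "?sharedKey=RAW({{sharedKey}})&include=myFileName\\.txt")
    .to("file:data/inbox");
----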
 
 
