This is an automated email from the ASF dual-hosted git repository.
davsclaus pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git
The following commit(s) were added to refs/heads/main by this push:
new bbb439f CAMEL-16861: Cleanup and update EIP docs
bbb439f is described below
commit bbb439ff9cf3349e02894e575adc7d89cd1e73a2
Author: Claus Ibsen <[email protected]>
AuthorDate: Wed Sep 15 16:32:19 2021 +0200
CAMEL-16861: Cleanup and update EIP docs
---
.../docs/modules/eips/pages/claimCheck-eip.adoc | 39 ++++++++++-------
.../modules/eips/pages/competing-consumers.adoc | 49 +++++++++++++++-------
2 files changed, 60 insertions(+), 28 deletions(-)
diff --git a/core/camel-core-engine/src/main/docs/modules/eips/pages/claimCheck-eip.adoc b/core/camel-core-engine/src/main/docs/modules/eips/pages/claimCheck-eip.adoc
index f3cf3ef..ce1ac31 100644
--- a/core/camel-core-engine/src/main/docs/modules/eips/pages/claimCheck-eip.adoc
+++ b/core/camel-core-engine/src/main/docs/modules/eips/pages/claimCheck-eip.adoc
@@ -36,11 +36,11 @@ The Claim Check EIP supports 5 options which are listed below:
When using this EIP you must specify the operation to use, which can be one of the following:
-* Get - Gets (does not remove) the claim check by the given key.
-* GetAndRemove - Gets and remove the claim check by the given key.
-* Set - Sets a new (will override if key already exists) claim check with the given key.
-* Push - Sets a new claim check on the stack (does not use key).
-* Pop - Gets the latest claim check from the stack (does not use key).
+* *Get* - Gets (does not remove) the claim check by the given key.
+* *GetAndRemove* - Gets and removes the claim check by the given key.
+* *Set* - Sets a new claim check with the given key (overrides any existing value for that key).
+* *Push* - Sets a new claim check on the stack (does not use a key).
+* *Pop* - Gets the latest claim check from the stack (does not use a key).
When using the `Get`, `GetAndRemove`, or `Set` operation you must specify a key.
These operations will then store and retrieve the data using this key. You can use this to store multiple sets of data under different keys.
@@ -48,10 +48,11 @@ These operations will then store and retrieve the data using this key. You can u
The `Push` and `Pop` operations do *not* use a key but store the data in a stack structure.
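As an aside, the semantics of these five operations can be sketched with a plain key-value map plus a stack. This is an illustrative model only, with made-up names; it is not Camel's actual claim check repository, which is internal to the EIP:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the five Claim Check operations (not Camel's implementation).
class ClaimCheckModel {
    private final Map<String, Object> byKey = new HashMap<>();
    private final Deque<Object> stack = new ArrayDeque<>();

    Object get(String key) { return byKey.get(key); }             // Get: read, do not remove
    Object getAndRemove(String key) { return byKey.remove(key); } // GetAndRemove: read and delete
    void set(String key, Object data) { byKey.put(key, data); }   // Set: overrides an existing key
    void push(Object data) { stack.push(data); }                  // Push: no key, stack-based
    Object pop() { return stack.pop(); }                          // Pop: latest pushed entry
}
```

Note how `Get` leaves the entry in place while `GetAndRemove` deletes it, and how `Push`/`Pop` ignore keys entirely.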
-== Filter what data to merge back
+== Merging data using get or pop operation
-The `filter` option is used to define what data to merge back when using the `Get` or `Pop` operation. When data is merged back
-then its merged using a `AggregationStrategy`. The default strategy uses the `filter` option to easily specify what data to merge back.
+The `Get`, `GetAndRemove` and `Pop` operations will claim data back from the claim check repository.
+The data is then merged with the current data on the exchange; this is done with an `AggregationStrategy`.
+The default strategy uses the `filter` option to easily specify what data to merge back.
The `filter` option takes a `String` value with the following syntax:
@@ -62,7 +63,7 @@ The `filter` option takes a `String` value with the following syntax:
The pattern rule supports wildcard and regular expression:
-* wildcard match (pattern ends with a `*` and the name starts with the pattern)
+* wildcard match (pattern ends with a `*`, and the name starts with the pattern)
* regular expression match
You can specify multiple rules separated by commas.
@@ -71,30 +72,35 @@ You can specify multiple rules separated by comma.
For example, to include the message body and all headers starting with _foo_:
+[source,text]
----
body,header:foo*
----
To only merge back the message body:
+[source,text]
----
body
----
To only merge back the message attachments:
+[source,text]
----
attachments
----
To only merge back headers:
+[source,text]
----
headers
----
To only merge back a header named foo:
+[source,text]
----
header:foo
----
@@ -103,7 +109,7 @@ If the filter rule is specified as empty or as wildcard then everything is merge
Notice that when merging data back, any existing data with the same keys is overwritten, and all other existing data is preserved.
-=== Fine grained filtering with include and exclude pattern
+=== Filtering with include and exclude patterns
The syntax also supports the following prefixes which can be used to specify include, exclude, or remove
@@ -112,16 +118,22 @@ The syntax also supports the following prefixes which can be used to specify inc
* `--` = to remove (remove takes precedence)
For example to skip the message body, and merge back everything else
+
+[source,text]
----
-body
----
Or to skip the message header foo, and merge back everything else
+
+[source,text]
----
-header:foo
----
-You can also instruct to remove headers when merging data back, for example to remove all headers starting with _bar_:
+You can also instruct Camel to remove headers when merging data back, for example to remove all headers starting with _bar_:
+
+[source,text]
----
--headers:bar*
----
@@ -148,8 +160,7 @@ from("direct:start")
.to("mock:e");
----
-
-== Java Examples
+== Example
The following example shows the `Push` and `Pop` operations in action:
@@ -204,7 +215,7 @@ from("direct:start")
.to("mock:c");
----
-== XML examples
+=== XML examples
The following example shows the `Push` and `Pop` operations in action:
diff --git a/core/camel-core-engine/src/main/docs/modules/eips/pages/competing-consumers.adoc b/core/camel-core-engine/src/main/docs/modules/eips/pages/competing-consumers.adoc
index 845b39e..0bbf168 100644
--- a/core/camel-core-engine/src/main/docs/modules/eips/pages/competing-consumers.adoc
+++ b/core/camel-core-engine/src/main/docs/modules/eips/pages/competing-consumers.adoc
@@ -10,10 +10,10 @@ For example from SEDA, JMS, Kafka, and various AWS components.
image::eip/CompetingConsumers.gif[image]
-- SEDA for SEDA based concurrent processing using a thread pool
-- JMS for distributed SEDA based concurrent processing with queues which support reliable load balancing, failover and clustering.
+- xref:components::seda-component.adoc[SEDA] for SEDA-based concurrent processing using a thread pool
+- xref:components::jms-component.adoc[JMS] for distributed SEDA-based concurrent processing with queues which support reliable load balancing, failover, and clustering.
-For components which does not allow concurrent consumers, then Camel allows to route from the consumer
+For components which do not allow concurrent consumers, Camel allows routing from the consumer
to a thread-pool which can then further process the message concurrently, which simulates _quasi_ competing consumers.
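Outside Camel, the competing-consumers pattern itself can be sketched with a shared queue drained by a pool of workers (an illustrative sketch with made-up names, not a Camel API): each message is taken, and therefore processed, by exactly one consumer.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative competing consumers: several workers take from one shared queue,
// so each message is handled by exactly one consumer, never by two.
class CompetingConsumersSketch {
    static int consumeAll(int messages, int consumers) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < messages; i++) {
            queue.add(i); // pre-load all messages before the consumers start
        }
        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        for (int c = 0; c < consumers; c++) {
            pool.execute(() -> {
                Integer msg;
                while ((msg = queue.poll()) != null) {
                    processed.incrementAndGet(); // stand-in for real message processing
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

Because `poll()` removes a message atomically, the total processed count always equals the number of messages, regardless of how many consumers compete.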
@@ -54,8 +54,18 @@ from("file://inbox?move=../backup-${date:now:yyyyMMdd}")
.to("bean:calculateBean");
----
+And in XML DSL
+
+[source,xml]
+----
+<route>
+ <from uri="file://inbox?move=../backup-${date:now:yyyyMMdd}"/>
+ <to uri="bean:calculateBean"/>
+</route>
+----
+
The route is synchronous and there is only a single consumer running at any given time.
-This scenario is well known and it doesn't affect thread safety as we only have one active thread
+This scenario is well known, and it doesn't affect thread safety as we only have one active thread
involved at any given time.
Now imagine that the inbox folder is filled with files quicker than we can process them.
@@ -63,17 +73,17 @@ So we want to speed up this process. How can we do this?
Well we could try adding a 2nd route with the same route path.
That doesn't work so well, as we have competing consumers for the same files.
-That requires however that we use file locking so we wont have two consumers compete for the same file.
-By default Camel support this with its file locking option on the file component.
+That requires, however, that we use file locking, so we won't have two consumers compete for the same file.
+By default, Camel supports this with its file locking option on the file component.
-But what if the component doesn't support this, or its not possible to add a 2nd consumer
-for the same endpoint? And yes its a bit of a hack and the route logic code is duplicated.
+What if the component doesn't support this, or it's not possible to add a 2nd consumer
+for the same endpoint? And yes, it's _a bit of a hack_, and the route logic code is duplicated.
And what if we need more? Then we need to add a 3rd, a 4th, and so on.
-What if the processing of the file itself is the bottleneck? That is the calculateBean is slow.
+What if the processing of the file itself is the bottleneck, i.e. the `calculateBean` is slow?
So how can we process messages with this bean concurrently?
-Yeah we can use the xref:threads-eip.adoc[Threads EIP], so if we insert it in the route we get:
+We can use the xref:threads-eip.adoc[Threads EIP], so if we insert it in the route we get:
[source,java]
----
@@ -82,16 +92,27 @@ from("file://inbox?move=../backup-${date:now:yyyyMMdd}")
.to("bean:calculateBean");
----
+And in XML DSL
+
+[source,xml]
+----
+<route>
+ <from uri="file://inbox?move=../backup-${date:now:yyyyMMdd}"/>
+ <threads poolSize="10"/>
+ <to uri="bean:calculateBean"/>
+</route>
+----
+
So by inserting `threads(10)` we have instructed Camel that from this point forward in the route
it should use a thread pool with up to 10 concurrent threads.
-So when the file consumer delivers a message to the threads, then the threads take it from there
+So when the file consumer delivers a message to the threads, then the threads take it from there,
and the file consumer can return and continue to poll the next file.
By leveraging this fact we can still use a single file consumer to poll new files.
And polling a directory to just grab the file handle is very fast.
-And we wont have problem with file locking, sorting, filtering and whatnot.
+And we won't have problems with file locking, sorting, filtering, and whatnot.
And at the same time we can leverage the fact that we can process the file messages concurrently
-by the calculate bean.
+by the `calculateBean` bean.
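The hand-off that `threads(10)` performs can be sketched with a plain `ExecutorService` (illustrative only, with made-up names; the real Threads EIP also integrates with Camel's asynchronous routing engine): the submitting thread returns immediately, like the file consumer going back to polling, while the pool works through the messages.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative hand-off: the "consumer" submits work to a pool and moves on
// immediately, like the file consumer continuing to poll after threads(10).
class ThreadsHandOffSketch {
    static int process(int files, int poolSize) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < files; i++) {
            // Stand-in for the slow calculateBean; execute() returns at once,
            // so the submitting loop is never blocked by the processing.
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

The submitting loop plays the role of the synchronous file consumer; the pool threads play the role of the asynchronous processing after the Threads EIP.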
Here at the end let's take a closer look at what happens with the synchronous thread and the
asynchronous thread. The synchronous thread hands over the exchange to the new asynchronous thread, and as
@@ -99,7 +120,7 @@ such the synchronous thread completes. The asynchronous thread is then routing a
And when this thread finishes it will take care of the file completion strategy to move the file
into the backup folder. This is an important note: the on completion is done by the asynchronous thread.
-This ensures the file is not moved before the file is processed successfully. Suppose the calculate bean
+This ensures the file is not moved before the file is processed successfully. Suppose the `calculateBean` bean
could not process one of the files. If it was the synchronous thread that should do the on completion strategy
then the file would have been moved too early into the backup folder. By handing over this to the asynchronous
thread we do it after we have processed the message completely.