This is an automated email from the ASF dual-hosted git repository.

davsclaus pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git
The following commit(s) were added to refs/heads/main by this push:
     new 696aaf1ee8d Polished doc

696aaf1ee8d is described below

commit 696aaf1ee8dcea24084a949a6b53b934e8197074
Author: Claus Ibsen <claus.ib...@gmail.com>
AuthorDate: Mon Apr 24 20:45:22 2023 +0200

    Polished doc
---
 .../camel-sql/src/main/docs/sql-component.adoc | 107 +++++++++------
 1 file changed, 45 insertions(+), 62 deletions(-)

diff --git a/components/camel-sql/src/main/docs/sql-component.adoc b/components/camel-sql/src/main/docs/sql-component.adoc
index 301e477a739..74388dabc70 100644
--- a/components/camel-sql/src/main/docs/sql-component.adoc
+++ b/components/camel-sql/src/main/docs/sql-component.adoc
@@ -36,11 +36,8 @@ for this component:

The SQL component also supports:

-* a JDBC based repository for the
-Idempotent Consumer EIP pattern. See
-further below.
-* a JDBC based repository for the Aggregator EIP
-pattern. See further below.
+* a JDBC based repository for the Idempotent Consumer EIP pattern. See further below.
+* a JDBC based repository for the Aggregator EIP pattern. See further below.

== URI format

@@ -56,8 +53,7 @@ The SQL component uses the following endpoint URI notation:
sql:select * from table where id=# order by name[?options]
----

-You can use named parameters by using
-:`#name_of_the_parameter` style as shown:
+You can use named parameters by using `:#name_of_the_parameter` style as shown:

----
sql:select * from table where id=:#myId order by name[?options]
@@ -66,13 +62,13 @@ sql:select * from table where id=:#myId order by name[?options]

When using named parameters, Camel will look up the names in the given precedence:

-1. from message body if its a `java.util.Map`
-2. from message headers
+1. from xref:languages:simple-language.adoc[Simple] expressions
+2. from the message body if it is a `java.util.Map`
+3. from message headers

If a named parameter cannot be resolved, then an exception is thrown.
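For illustration, the body/header part of this lookup order can be sketched in plain Java. This is a hypothetical sketch, not the component's actual code; Simple expressions are resolved separately by Camel and are omitted here:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the named-parameter lookup precedence described
// above: the message body wins when it is a Map containing the name,
// otherwise the headers are consulted, otherwise an exception is thrown.
public class NamedParamLookup {

    static Object resolve(String name, Object body, Map<String, ?> headers) {
        if (body instanceof Map<?, ?> map && map.containsKey(name)) {
            return map.get(name);
        }
        if (headers.containsKey(name)) {
            return headers.get(name);
        }
        throw new IllegalArgumentException("Cannot resolve named parameter: " + name);
    }

    public static void main(String[] args) {
        Map<String, Object> body = new HashMap<>();
        body.put("myId", 42);
        // the body is a Map holding the key, so it takes precedence over headers
        System.out.println(resolve("myId", body, Map.of("myId", 7)));         // 42
        // a non-Map body falls through to the headers
        System.out.println(resolve("myId", "plain body", Map.of("myId", 7))); // 7
    }
}
```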
-You can use Simple expressions as parameters as
-shown:
+You can use xref:languages:simple-language.adoc[Simple] expressions as parameters as shown:

----
sql:select * from table where id=:#${exchangeProperty.myId} order by name[?options]
@@ -83,14 +79,13 @@ SQL query is substituted with the `pass:[#]` symbol, because
the `?` symbol is used to specify options for the endpoint.
The `?` symbol replacement can be configured on a per-endpoint basis.

-You can externalize your SQL queries to files
-in the classpath or file system as shown:
+You can externalize your SQL queries to files in the classpath or file system as shown:

----
sql:classpath:sql/myquery.sql[?options]
----

-And the myquery.sql file is in the classpath and is just a plain text
+And the `myquery.sql` file is in the classpath and is just a plain text file:

[source,sql]
----
@@ -107,7 +102,6 @@ In the file you can use multiple lines and format the SQL as you wish.
You can also use comments, such as the `--` dash line.
-
// component-configure options: START
// component-configure options: END
@@ -156,7 +150,7 @@ depending on the presence of query parameters.

For `select` operations, the result is an instance of
`List<Map<String, Object>>` type, as returned by the
-http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/jdbc/core/JdbcTemplate.html#queryForList(java.lang.String,%20java.lang.Object%91%93)[JdbcTemplate.queryForList()]
+http://static.springframework.org/spring/docs/6.0.x/api/org/springframework/jdbc/core/JdbcTemplate.html#queryForList(java.lang.String,%20java.lang.Object%91%93)[JdbcTemplate.queryForList()]
method. For `update` operations, a `NULL` body is returned, as the `update`
result is only set as a header and never as a body.
By default, the result is placed in the message body.
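To make the `select` result shape concrete, here is a small plain-Java sketch of iterating a `List<Map<String, Object>>` result. The data and column names (`ID`, `PROJECT`) are hard-coded, hypothetical examples, not output of a real query:

```java
import java.util.List;
import java.util.Map;

// Sketch of consuming a select result of type List<Map<String, Object>>,
// the shape described above: one Map per row, column name -> column value.
public class SelectResultDemo {

    static String summarize(List<Map<String, Object>> rows) {
        StringBuilder sb = new StringBuilder();
        for (Map<String, Object> row : rows) {
            sb.append(row.get("ID")).append('=').append(row.get("PROJECT")).append(';');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = List.of(
                Map.of("ID", 1, "PROJECT", "Camel"),
                Map.of("ID", 2, "PROJECT", "AMQ"));
        System.out.println(summarize(rows)); // 1=Camel;2=AMQ;
    }
}
```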
If the
@@ -175,7 +169,7 @@ from("jms:order.inbox")

== Using StreamList

-The producer supports outputType=StreamList
+The producer supports `outputType=StreamList`
that uses an iterator to stream the output of the query. This allows processing
the data in a streaming fashion, which for example can be used by the Splitter EIP
to process each row one at a time, and load data from the database as needed.

[source,java]
----
from("direct:withSplitModel")
-    .to("sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel")
-    .to("log:stream")
-    .split(body()).streaming()
-    .to("log:row")
-    .to("mock:result")
-    .end();
+        .to("sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel")
+        .to("log:stream")
+        .split(body()).streaming()
+        .to("log:row")
+        .to("mock:result")
+        .end();
----

== Generated keys

-*Since Camel 2.12.4, 2.13.1 and 2.14*
-
If you insert data using SQL INSERT, then the RDBMS may support auto-generated
keys. You can instruct the SQL producer to return the
-generated keys in headers. +
- To do that set the header `CamelSqlRetrieveGeneratedKeys=true`. Then
-the generated keys will be provided as headers with the keys listed in
-the table above.
+generated keys in headers. To do that, set the header `CamelSqlRetrieveGeneratedKeys=true`.
+Then the generated keys will be provided as headers with the keys listed in the table above.

To specify which generated columns should be retrieved, set the header
`CamelSqlGeneratedColumns` to a `String[]` or `int[]`, indicating the
column names or indexes, respectively. Some databases
-requires this, such as Oracle. It may also be necessary to use the `parametersCount` option if the
+require this, such as Oracle. It may also be necessary to use the `parametersCount` option if the
driver cannot correctly determine the number of parameters.
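As a sketch, the retrieved keys could then be read from the headers like this. The headers are simulated with a plain `Map` here, and the header names (`CamelSqlGeneratedKeysRowCount`, `CamelSqlGeneratedKeyRows`) and the `ID` column are assumptions taken from the component's header table; verify them against `SqlConstants` before relying on them:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: reading auto-generated keys after an INSERT.
// Real code would call exchange.getMessage().getHeader(...); here the
// headers are simulated with a plain Map, and the header names are
// assumptions, not verified against the Camel source.
public class GeneratedKeysDemo {

    @SuppressWarnings("unchecked")
    static List<Object> extractIds(Map<String, Object> headers) {
        List<Map<String, Object>> rows =
                (List<Map<String, Object>>) headers.get("CamelSqlGeneratedKeyRows");
        // each row maps a generated column name to its value; "ID" is illustrative
        return rows.stream().map(row -> row.get("ID")).toList();
    }

    public static void main(String[] args) {
        Map<String, Object> headers = Map.of(
                "CamelSqlGeneratedKeysRowCount", 2,
                "CamelSqlGeneratedKeyRows",
                List.of(Map.of("ID", (Object) 101), Map.of("ID", (Object) 102)));
        System.out.println(extractIds(headers)); // [101, 102]
    }
}
```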
You can see more details in this
-https://gitbox.apache.org/repos/asf?p=camel.git;a=blob;f=components/camel-sql/src/test/java/org/apache/camel/component/sql/SqlGeneratedKeysTest.java;h=54c19b7332bb0aa81ee24ff3d3a66885a6b9e9aa;hb=HEAD[unit test].
+https://gitbox.apache.org/repos/asf?p=camel.git;a=blob_plain;f=components/camel-sql/src/test/java/org/apache/camel/component/sql/SqlGeneratedKeysTest.java;hb=HEAD[unit test].

== DataSource

@@ -222,22 +212,18 @@ sql:select * from table where id=# order by name?dataSource=#myDS

== Using named parameters

-*Since Camel 2.11*
-
In the given route below, we want to get all the projects from the
-projects table. Notice the SQL query has 2 named parameters, :#lic and
-:#min. +
- Camel will then lookup for these parameters from the message body or
+projects table. Notice the SQL query has two named parameters, `:#lic` and
+`:#min`. Camel will then look up these parameters from the message body or
message headers. Notice in the example below we set two headers with
-constant value +
- for the named parameters:
+constant values for the named parameters:

[source,java]
----
- from("direct:projects")
- .setHeader("lic", constant("ASF"))
- .setHeader("min", constant(123))
- .to("sql:select * from projects where license = :#lic and id > :#min order by id")
+from("direct:projects")
+    .setHeader("lic", constant("ASF"))
+    .setHeader("min", constant(123))
+    .to("sql:select * from projects where license = :#lic and id > :#min order by id")
----

Though if the message body is a `java.util.Map` then the named
@@ -245,14 +231,12 @@ parameters will be taken from the body.
[source,java]
----
- from("direct:projects")
- .to("sql:select * from projects where license = :#lic and id > :#min order by id")
+from("direct:projects")
+    .to("sql:select * from projects where license = :#lic and id > :#min order by id")
----

== Using expression parameters in producers

-*Since Camel 2.14*
-
In the given route below, we want to get all the projects from the
database. It uses the body of the exchange for defining the license and
uses the value of a property as the second parameter.

[source,java]
----
from("direct:projects")
@@ -289,6 +273,8 @@ public static class MyIdGenerator {

    public int nextId() {
        return id++;
    }
+
+}
----

Notice that there is no existing `Exchange` with message body and headers, so
@@ -358,7 +344,6 @@ from("direct:query")
    .to("mock:query");
----

-
== Using the JDBC based idempotent repository

In this section we will use the JDBC based
@@ -436,7 +421,7 @@ the second one is the message id (`String`).
|===

The option `tableName` can be used to keep the default SQL queries but with a different table name.
-However if you want to customize the SQL queries then you can configure each of them individually.
+However, if you want to customize the SQL queries then you can configure each of them individually.

=== Orphan Lock aware Jdbc IdempotentRepository

@@ -460,7 +445,7 @@ This repository has two additional configuration parameters

=== Caching Jdbc IdempotentRepository

-Some SQL implementations are not fast on a per query basis. The
+Some SQL implementations are not fast on a per-query basis. The
`JdbcMessageIdRepository` implementation does its idempotent checks
individually within SQL transactions. Checking a mere 100 keys can take
minutes. The `JdbcCachedMessageIdRepository` preloads an in-memory
@@ -481,14 +466,12 @@ persistent support for the Aggregator.

Only when an Exchange has been successfully processed will it be marked as
complete, which happens when the `confirm` method is invoked on the `AggregationRepository`.
This means if the same
-Exchange fails again it will be kept retried until
-it success.
+Exchange fails again, it will be retried until it succeeds.

You can use option `maximumRedeliveries` to limit the maximum number of
redelivery attempts for a given recovered Exchange.
You must also set the `deadLetterUri` option so Camel knows where to
-send the Exchange when the `maximumRedeliveries` was
-hit.
+send the Exchange when the `maximumRedeliveries` is hit.

You can see some examples in the unit tests of camel-sql, for example
`JdbcAggregateRecoverDeadLetterChannelTest.java`
@@ -502,10 +485,10 @@ following example aggregation will be used.

The table structure definition of both tables is identical: in both cases
a String value is used as the key (*id*) whereas a Blob contains the
-exchange serialized in byte array. +
- However one difference should be remembered: the *id* field does not
-have the same content depending on the table. +
- In the aggregation table *id* holds the correlation Id used by the
+exchange serialized as a byte array.
+However, one difference should be remembered: the *id* field does not
+have the same content depending on the table.
+In the aggregation table *id* holds the correlation id used by the
component to aggregate the messages. In the completed table, *id* holds
the id of the exchange stored in the corresponding blob field.

@@ -558,8 +541,7 @@ CREATE TABLE aggregationRepo3_completed (
);
----

-And then configure the repository to enable this behavior as shown
-below:
+And then configure the repository to enable this behavior as shown below:

[source,xml]
----
@@ -707,6 +689,7 @@ Propagation is specified by constants of `org.springframework.transaction.Transa
so `propagationBehaviorName` is a convenient setter that allows using the names of the constants.

=== Clustering
+
JdbcAggregationRepository does not provide recovery in a clustered environment.
You may use ClusteredJdbcAggregationRepository, which provides limited support for recovery in a clustered environment: recovery is handled separately by each member of the cluster, i.e. a member may only recover exchanges that it completed itself.
@@ -715,7 +698,7 @@ To enable this behaviour, property `recoverByInstance` must be set to true, and

Besides, the completed table must have an `instance_id VARCHAR(255)` column.

NOTE: Since each member is solely responsible for the recovery of its completed exchanges, if a member is stopped, its completed exchanges will not be recovered until it is restarted, unless you update the completed table to assign them to another member (by changing `instance_id` for those completed exchanges).

=== PostgreSQL case

@@ -745,9 +728,9 @@ but without marking PostgreSQL connection as invalid.

A starter module is available to spring-boot users. When using the starter,
the `DataSource` can be directly configured using spring-boot properties.

-[source,java]
+[source,properties]
----
# Example for a mysql datasource
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=dbuser
spring.datasource.password=dbpass