This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector.git


The following commit(s) were added to refs/heads/master by this push:
     new 5d7cd11  chore(docs): cleanup wording and syntax
     new bc2e945  Merge pull request #88 from zregvart/pr/doc-cleanup
5d7cd11 is described below

commit 5d7cd11d860a619748b7373e5b3c3b269e089669
Author: Zoran Regvart <zregv...@apache.org>
AuthorDate: Tue Feb 11 14:16:35 2020 +0100

    chore(docs): cleanup wording and syntax
    
    This makes sure that the heading levels are consistent with the heading
    levels expected by Antora, i.e. page needs to start at heading 1 in
    order to have the title set.
    
    Makes sure that code block titles are set at the right place.
    
    Rewords the examples, the trying out locally in particular and
    structures the examples in sections.
---
 docs/modules/ROOT/pages/index.adoc                 |   2 +-
 docs/modules/ROOT/pages/try-it-out-locally.adoc    | 231 ++++++++++-----------
 .../try-it-out-on-openshift-with-strimzi.adoc      |  14 +-
 3 files changed, 122 insertions(+), 125 deletions(-)

diff --git a/docs/modules/ROOT/pages/index.adoc b/docs/modules/ROOT/pages/index.adoc
index 01018b6..c6e4079 100644
--- a/docs/modules/ROOT/pages/index.adoc
+++ b/docs/modules/ROOT/pages/index.adoc
@@ -1,4 +1,4 @@
-# About Apache Camel Kafka Connector
+= About Apache Camel Kafka Connector
 
 Camel Kafka Connector allows you to use all Camel xref:components::index.adoc[components] within http://kafka.apache.org/documentation/#connect[Kafka Connect].
 
diff --git a/docs/modules/ROOT/pages/try-it-out-locally.adoc b/docs/modules/ROOT/pages/try-it-out-locally.adoc
index 11d493a..c0f9dc1 100644
--- a/docs/modules/ROOT/pages/try-it-out-locally.adoc
+++ b/docs/modules/ROOT/pages/try-it-out-locally.adoc
@@ -1,8 +1,9 @@
-== Try it out locally
+= Try it out locally
 
-Get a locally running kafka instance by following https://kafka.apache.org/quickstart[apache Kafka quickstart guide].
+== Run Kafka
+
+First, get a locally running Kafka instance by following the Apache Kafka https://kafka.apache.org/quickstart[quickstart guide]. This usually boils down to:
 
-==== This usually boils down to:
 .Set KAFKA_HOME
 [source,bash]
 ----
@@ -24,13 +25,19 @@ $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
 .Create "mytopic" topic
 [source,bash]
 ----
-$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mytopic
+$KAFKA_HOME/bin/kafka-topics.sh --create \
+  --zookeeper localhost:2181 \
+  --replication-factor 1 \
+  --partitions 1 \
+  --topic mytopic
 ----
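+
+If you want to double-check that the topic exists, list the topics (this mirrors the flags used above):
+
+[source,bash]
+----
+$KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper localhost:2181
+----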
 
-==== Then run Camel kafka connectors source and/or syncs:
+Next, run the Camel Kafka connector sources and/or sinks:
+
 [NOTE]
 ====
 In order to run more than one instance of a standalone Kafka Connect on the same machine, you need to duplicate the `$KAFKA_HOME/config/connect-standalone.properties` file, changing the HTTP port used for each instance:
+
 [source,bash]
 ----
 cp $KAFKA_HOME/config/connect-standalone.properties $KAFKA_HOME/config/connect-standalone2.properties
@@ -39,140 +46,121 @@ echo rest.port=<your unused port number> >> $KAFKA_HOME/config/connect-standalon
 ----
 ====
 
-.Run the default sink, just a camel logger:
+You can use these Kafka utilities to listen on or produce to a Kafka topic:
+
+.Run a Kafka consumer
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelSinkConnector.properties
+$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
 ----
 
-.Run the default source, just a camel timer:
+.Run an interactive CLI Kafka producer
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelSourceConnector.properties
+$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
 ----
 
-.Run the AWS Kinesis source:
-You can adjust properties in `examples/CamelAWSKinesisSourceConnector.properties` for example configuring access key, secret key and region
-by adding `camel.component.aws-kinesis.configuration.access-key=youraccesskey`, `camel.component.aws-kinesis.configuration.secret-key=yoursecretkey` and `camel.component.aws-kinesis.configuration.region=yourregion`
+== Try some examples
 
-[source,bash]
-----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSKinesisSourceConnector.properties
-----
+For the following examples you need to fetch the `camel-kafka-connector` project and https://github.com/apache/camel-kafka-connector/blob/master/README.adoc#build-the-project[build] it locally by running `./mvnw package` from the root of the project. Look into the `config` and `examples` directories for the configuration files (`*.properties`) of the examples showcased here.
 
-.Run the AWS SQS sink:
-You can adjust properties in `examples/CamelAWSSQSSinkConnector.properties` for example configuring access key, secret key and region
-by adding `camel.component.aws-sqs.configuration.access-key=youraccesskey`, `camel.component.aws-sqs.configuration.secret-key=yoursecretkey` and `camel.component.aws-sqs.configuration.region=yourregion`
+First you need to set the `CLASSPATH` environment variable to include the `jar` files from the `core/target/camel-kafka-connector-<<version>>-package/share/java/` directory. On UNIX systems this can be done by running:
 
 [source,bash]
 ----
 export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSSQSSinkConnector.properties
 ----
 
-.Run the AWS SQS source:
-You can adjust properties in `examples/CamelAWSSQSSourceConnector.properties` for example configuring access key, secret key and region
-by adding `camel.component.aws-sqs.configuration.access-key=youraccesskey`, `camel.component.aws-sqs.configuration.secret-key=yoursecretkey` and `camel.component.aws-sqs.configuration.region=yourregion`
+=== Simple logger (sink)
+
+This is an example of a _sink_ that logs messages consumed from `mytopic`.
 
+.Run the default sink, just a Camel logger:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSSQSSourceConnector.properties
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelSinkConnector.properties
 ----
 
-.Run the AWS SNS sink:
-You can adjust properties in `examples/CamelAWSSNSSinkConnector.properties` for example configuring access key, secret key and region
-by adding `camel.component.aws-sns.configuration.access-key=youraccesskey`, `camel.component.aws-sns.configuration.secret-key=yoursecretkey` and `camel.component.aws-sns.configuration.region=yourregion`
+=== Timer (source)
+
+This is an example of a _source_ that produces a message every second to `mytopic`.
 
+.Run the default source, just a Camel timer:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSSNSSinkConnector.properties
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelSourceConnector.properties
 ----
 
-.Run the AWS S3 source:
-You can adjust properties in `examples/CamelAWSS3SourceConnector.properties` for example configuring access key, secret key and region
-by adding `camel.component.aws-s3.configuration.access-key=youraccesskey`, `camel.component.aws-s3.configuration.secret-key=yoursecretkey` and `camel.component.aws-s3.configuration.region=yourregion`
-Here you also have a converter specific for S3Object.
+=== AWS Kinesis (source)
+
+This example consumes from an AWS Kinesis data stream and transfers the payload to the `mytopic` topic in Kafka.
 
+Adjust the properties in `examples/CamelAWSKinesisSourceConnector.properties` for your environment: you need to configure the access key, secret key and region by setting `camel.component.aws-kinesis.configuration.access-key=youraccesskey`, `camel.component.aws-kinesis.configuration.secret-key=yoursecretkey` and `camel.component.aws-kinesis.configuration.region=yourregion`.
+
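+For instance, the relevant lines in the properties file would follow this shape (the values are placeholders you replace with your own):
+
+[source,bash]
+----
+camel.component.aws-kinesis.configuration.access-key=youraccesskey
+camel.component.aws-kinesis.configuration.secret-key=yoursecretkey
+camel.component.aws-kinesis.configuration.region=yourregion
+----
+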
+.Run the AWS Kinesis source:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSS3SourceConnector.properties
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSKinesisSourceConnector.properties
 ----
 
-.Run the cassandraql source:
+=== AWS SQS (sink)
 
-To run this example you'll need a bit more work:
+This example consumes from the Kafka topic `mytopic` and transfers the payload to AWS SQS.
 
-First you'll need to run a cassandra instance:
+Adjust the properties in `examples/CamelAWSSQSSinkConnector.properties` for your environment: you need to configure the access key, secret key and region by setting `camel.component.aws-sqs.configuration.access-key=youraccesskey`, `camel.component.aws-sqs.configuration.secret-key=yoursecretkey` and `camel.component.aws-sqs.configuration.region=yourregion`.
 
+.Run the AWS SQS sink:
 [source,bash]
 ----
-docker run --name master_node --env MAX_HEAP_SIZE='800M' -dt oscerd/cassandra
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSSQSSinkConnector.properties
 ----
 
-Check everything is fine:
+=== AWS SQS (source)
 
-[source,bash]
-----
-docker exec -ti master_node /opt/cassandra/bin/nodetool status
-Datacenter: datacenter1
-=======================
-Status=Up/Down
-|/ State=Normal/Leaving/Joining/Moving
---  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
-UN  172.17.0.2  251.32 KiB  256          100.0%            5126aaad-f143-43e9-920a-0f9540a93967  rack1
-----
+This example consumes from the AWS SQS queue `mysqs` and transfers the payload to the `mytopic` topic in Kafka.
 
-You'll need a local installation of cassandra, in particular the 3.11.4.
-Now we can populate the database:
+Adjust the properties in `examples/CamelAWSSQSSourceConnector.properties` for your environment: you need to configure the access key, secret key and region by setting `camel.component.aws-sqs.configuration.access-key=youraccesskey`, `camel.component.aws-sqs.configuration.secret-key=yoursecretkey` and `camel.component.aws-sqs.configuration.region=yourregion`.
 
+.Run the AWS SQS source:
 [source,bash]
 ----
-<LOCAL_CASSANDRA_HOME>/bin/cqlsh $(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSSQSSourceConnector.properties
 ----
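+
+To see data flowing, you can send a test message to the queue, for example with the AWS CLI (assuming it is installed and configured; the queue URL below is a placeholder):
+
+[source,bash]
+----
+aws sqs send-message --queue-url https://sqs.yourregion.amazonaws.com/123456789012/mysqs --message-body "hello"
+----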
 
-and run the script:
+=== AWS SNS (sink)
 
-[source,bash]
-----
-create keyspace test with replication = {'class':'SimpleStrategy', 'replication_factor':3};
-use test;
-create table users ( id int primary key, name text );
-insert into users (id,name) values (1, 'oscerd');
-quit;
-----
+This example consumes from the `mytopic` Kafka topic and transfers the payload to the AWS SNS topic named `topic`.
 
-The output of the following command should be used in the configuration of CamelCassandraQLSourceConnector.properties
+Adjust the properties in `examples/CamelAWSSNSSinkConnector.properties` for your environment: you need to configure the access key, secret key and region by setting `camel.component.aws-sns.configuration.access-key=youraccesskey`, `camel.component.aws-sns.configuration.secret-key=yoursecretkey` and `camel.component.aws-sns.configuration.region=yourregion`.
 
+.Run the AWS SNS sink:
 [source,bash]
 ----
-<LOCAL_CASSANDRA_HOME>/bin/cqlsh $(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSSNSSinkConnector.properties
 ----
 
-in particular it should be used as address instead of localhost in the `camel.source.url`
+=== AWS S3 (source)
+
+This example fetches objects from AWS S3 in the `camel-kafka-connector` bucket and transfers the payload to the `mytopic` Kafka topic. It also shows how to implement a custom converter that converts the bytes received from S3 to Kafka's `SchemaAndValue`.
+
+Adjust the properties in `examples/CamelAWSS3SourceConnector.properties` for your environment: you need to configure the access key, secret key and region by setting `camel.component.aws-s3.configuration.access-key=youraccesskey`, `camel.component.aws-s3.configuration.secret-key=yoursecretkey` and `camel.component.aws-s3.configuration.region=yourregion`.
+
+.Run the AWS S3 source:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelCassandraQLSourceConnector.properties
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelAWSS3SourceConnector.properties
 ----
 
-.Run the cassandraql sink:
-
-To run this example you'll need a bit more work:
+=== Apache Cassandra
 
-First you'll need to run a cassandra instance:
+These examples require a running Cassandra instance; for simplicity, the steps below show how to start Cassandra using Docker. First, run a Cassandra instance:
 
 [source,bash]
 ----
 docker run --name master_node --env MAX_HEAP_SIZE='800M' -dt oscerd/cassandra
 ----
 
-Check everything is fine:
+Next, make sure Cassandra is running:
 
 [source,bash]
 ----
@@ -185,122 +173,131 @@ Status=Up/Down
 UN  172.17.0.2  251.32 KiB  256          100.0%            5126aaad-f143-43e9-920a-0f9540a93967  rack1
 ----
 
-You'll need a local installation of cassandra, in particular the 3.11.4.
-Now we can populate the database:
+To populate the database using the `cqlsh` tool, you'll need a local installation of Cassandra. Download and extract the Apache Cassandra distribution to a directory; we reference that installation directory as `LOCAL_CASSANDRA_HOME`. Here we use version 3.11.4 to connect to the Cassandra instance we started using Docker.
 
 [source,bash]
 ----
 <LOCAL_CASSANDRA_HOME>/bin/cqlsh $(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)
 ----
 
-and run the script:
+Next, execute the following script to create the keyspace `test` and the table `users`, and to insert one row into it.
 
 [source,bash]
 ----
 create keyspace test with replication = {'class':'SimpleStrategy', 'replication_factor':3};
 use test;
-create table users (id uuid primary key, name text );
-insert into users (id,name) values (now(), 'oscerd');
+create table users ( id int primary key, name text );
+insert into users (id,name) values (1, 'oscerd');
 quit;
 ----
 
-The output of the following command should be used in the configuration of CamelCassandraQLSourceConnector.properties
+The configuration `.properties` files used below need the IP address of the Cassandra master node: replace the value `172.17.0.2` in the `camel.source.url` property, or `localhost` in the `camel.sink.url` property, with the IP of the master node obtained from Docker. Each example uses a different `.properties` file, shown on the command line used to run it.
 
 [source,bash]
 ----
-<LOCAL_CASSANDRA_HOME>/bin/cqlsh $(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)
+docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node
+----
+
+==== Apache Cassandra (source)
+
+This example polls Cassandra via CQL (`select * from users`) in the `test` keyspace and transfers the result to the `mytopic` Kafka topic.
+
+.Run the Cassandra CQL source:
+[source,bash]
 ----
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelCassandraQLSourceConnector.properties
+----
+
+==== Apache Cassandra (sink)
+
+This example adds data to the `users` table in Cassandra from the data consumed from the `mytopic` Kafka topic. Notice how the `name` column is populated from the Kafka message using the CQL command `insert into users...`.
 
-in particular it should be used as address instead of localhost in the `camel.sink.url`
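+
+The sink endpoint is set via `camel.sink.url` in that file; a hypothetical value (illustrative only, check the shipped `.properties` file for the real one) could look like:
+
+[source,bash]
+----
+camel.sink.url=cql://172.17.0.2/test?cql=insert into users(id, name) values (1, ?)
+----
+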
+.Run the Cassandra CQL sink:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
 $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelCassandraQLSinkConnector.properties
 ----
 
-.Run the Elasticsearch sink:
-You can adjust properties in `examples/CamelElasticSearchSinkConnector.properties` for example configuring the hostAddresses.
+=== Elasticsearch (sink)
+
+This example passes data from the `mytopic` Kafka topic to the `sampleIndexName` index in Elasticsearch. Adjust the properties in `examples/CamelElasticSearchSinkConnector.properties` to reflect your environment, for example change `hostAddresses` to a valid Elasticsearch instance hostname and port.
 
 For the index operation, it might be necessary to provide or implement a `transformer`. A sample configuration would be similar to the one below:
+
 [source,bash]
 ----
 transforms=ElasticSearchTransformer
 ----
 
 This is the sample Transformer used in the integration test code that transforms Kafka's ConnectRecord to a Map:
+
 [source,bash]
 ----
 transforms.ElasticSearchTransformer.type=org.apache.camel.kafkaconnector.sink.elasticsearch.transforms.ConnectRecordValueToMapTransformer
 ----
 
 This is a configuration for the sample transformer that defines the key used in the map:
+
 [source,bash]
 ----
 transforms.ElasticSearchTransformer.key=MyKey
 ----
 
 When the configuration is ready, run the sink with:
+
+.Run the Elasticsearch sink:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
 $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelElasticSearchSinkConnector.properties
 ----
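+
+Once messages flow, you can inspect the index, for example (assuming Elasticsearch answers on the default port; substitute the index name configured in the `.properties` file):
+
+[source,bash]
+----
+curl "http://localhost:9200/sampleIndexName/_search?pretty"
+----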
 
-.Run the file sink, just a camel file appending to /tmp/kafkaconnect.txt:
+=== File (sink)
+
+This example appends data from the `mytopic` Kafka topic to the file `/tmp/kafkaconnect.txt`.
+
+.Run the file sink:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
 $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelFileSinkConnector.properties
 ----
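+
+To watch the sink at work, tail the output file as messages arrive:
+
+[source,bash]
+----
+tail -f /tmp/kafkaconnect.txt
+----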
 
-.Run the http sink:
-You can adjust properties in `examples/CamelHttpSinkConnector.properties` for example configuring the called url.
+=== HTTP (sink)
+
+This example sends data from the `mytopic` Kafka topic to an HTTP service. Adjust the properties in `examples/CamelHttpSinkConnector.properties` for your environment, for example by configuring the `camel.sink.url`.
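+
+For example, a hypothetical target service (the URL below is a placeholder, not part of the shipped example) would be configured as:
+
+[source,bash]
+----
+camel.sink.url=http://localhost:8080/api
+----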
 
+.Run the HTTP sink:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
 $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelHttpSinkConnector.properties
 ----
 
-.Run the JMS source:
-You can adjust properties in `examples/CamelJmsSourceConnector.properties` for example configuring username and password
-by adding `camel.component.sjms2.connection-factory.userName=yourusername` and `camel.component.sjms2.connection-factory.password=yourpassword`
+=== JMS (source)
+
+This example receives messages from a JMS queue named `myqueue` and transfers them to the `mytopic` Kafka topic. In this example ActiveMQ is used, configured to connect to the broker running on `localhost:61616`. Adjust the properties in `examples/CamelJmsSourceConnector.properties` for your environment, for example configuring the username and password by setting `camel.component.sjms2.connection-factory.userName=yourusername` and `camel.component.sjms2.connection-factory.password=yourpassword`.
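+
+The connection factory settings would then follow this shape (a sketch assuming the ActiveMQ client; check the shipped `.properties` file for the exact keys):
+
+[source,bash]
+----
+camel.component.sjms2.connection-factory=#class:org.apache.activemq.ActiveMQConnectionFactory
+camel.component.sjms2.connection-factory.brokerURL=tcp://localhost:61616
+camel.component.sjms2.connection-factory.userName=yourusername
+camel.component.sjms2.connection-factory.password=yourpassword
+----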
 
+.Run the JMS source:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
 $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelJmsSourceConnector.properties
 ----
 
-.Run the JMS sink:
-You can adjust properties in `examples/CamelJmsSourceConnector.properties` for example configuring username and password
-by adding `camel.component.sjms2.connection-factory.userName=yourusername` and `camel.component.sjms2.connection-factory.password=yourpassword`
+=== JMS (sink)
+
+This example receives messages from the `mytopic` Kafka topic and transfers them to a JMS queue named `myqueue`. In this example ActiveMQ is used, configured to connect to the broker running on `localhost:61616`. You can adjust the properties in `examples/CamelJmsSinkConnector.properties` for your environment, for example configuring the username and password by adding `camel.component.sjms2.connection-factory.userName=yourusername` and `camel.component.sjms2.connection-factory.password=yourpassword`.
 
+.Run the JMS sink:
 [source,bash]
 ----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
 $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelJmsSinkConnector.properties
 ----
 
-.Run the telegram source:
-Change your telegram bot token in `examples/CamelTelegramSourceConnector.properties`
-
-[source,bash]
-----
-export CLASSPATH="$(find core/target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelTelegramSourceConnector.properties
-----
+=== Telegram (source)
 
-==== Listen or produce from a Kafka topic using Kafka utilities:
+This example transfers messages sent to a Telegram bot to the `mytopic` Kafka topic. Set the Telegram bot token in `examples/CamelTelegramSourceConnector.properties` to reflect your bot's token.
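+
+For instance, the token can be passed on the endpoint URI (a sketch; the exact property layout is in the shipped `.properties` file):
+
+[source,bash]
+----
+camel.source.url=telegram:bots?authorizationToken=<your-bot-token>
+----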
 
-.Run an Kafka Consumer
+.Run the Telegram source:
 [source,bash]
 ----
-$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties examples/CamelTelegramSourceConnector.properties
 ----
 
-.Run an interactive CLI kafka producer
-[source,bash]
-----
-$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
-----
diff --git a/docs/modules/ROOT/pages/try-it-out-on-openshift-with-strimzi.adoc b/docs/modules/ROOT/pages/try-it-out-on-openshift-with-strimzi.adoc
index e5856b3..d7752c6 100644
--- a/docs/modules/ROOT/pages/try-it-out-on-openshift-with-strimzi.adoc
+++ b/docs/modules/ROOT/pages/try-it-out-on-openshift-with-strimzi.adoc
@@ -1,10 +1,10 @@
-== Try it out on OpenShift with Strimzi
+= Try it out on OpenShift with Strimzi
 
-You can use the Camel Kafka connectors also on Kubernetes and OpenShift with the link:https://strimzi.io[Strimzi project].
+You can also use the Camel Kafka connectors on Kubernetes and OpenShift with the https://strimzi.io[Strimzi project].
 Strimzi provides a set of operators and container images for running Kafka on Kubernetes and OpenShift.
 The following example shows how to run it with Camel Kafka connectors on OpenShift.
 
-=== Deploy Kafka and Kafka Connect
+== Deploy Kafka and Kafka Connect
 
 First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
 We need to create security objects as part of the installation, so it is necessary to switch to the admin user.
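+
+For example, on a local development cluster this could be (the exact login command depends on your cluster setup):
+
+[source,bash]
+----
+oc login -u system:admin
+----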
@@ -48,7 +48,7 @@ oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.13.0/example
 oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.13.0/examples/kafka-connect/kafka-connect-s2i-single-node-kafka.yaml
 
-=== Add Camel Kafka connector binaries
+== Add Camel Kafka connector binaries
 
 Strimzi uses Source2Image builds to allow users to add their own connectors to the existing Strimzi Docker images.
 We now need to build the connectors and add them to the image:
@@ -77,7 +77,7 @@ You should see something like this:
 
[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.0.1-SNAPSHOT"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.0.1-SNAPSHOT"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"2.3.0"},{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"2.3.0"}]
 ----
 
-=== Create connector instance
+== Create connector instance
 
 Now we can create an instance of a connector plugin - for example, the S3 connector:
 
@@ -112,11 +112,11 @@ You can check that status of the connector using
 oc exec -i -c kafka my-cluster-kafka-0 -- curl -s http://my-connect-cluster-connect-api:8083/connectors/s3-connector/status
 
-=== Check received messages
+== Check received messages
 
 You can also run the Kafka console consumer to see the messages received from the topic:
 
 [source,bash,options="nowrap"]
 ----
 oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic s3-topic --from-beginning
-----
\ No newline at end of file
+----
