This is an automated email from the ASF dual-hosted git repository.

cegerton pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
     new 9abce22e MINOR: Fix anchor links in Connect docs (#491)
9abce22e is described below

commit 9abce22e856c2959e55ec546d7a779a84704c7b0
Author: Chris Egerton <chr...@aiven.io>
AuthorDate: Mon Feb 13 10:19:37 2023 -0500

    MINOR: Fix anchor links in Connect docs (#491)

    Reviewers: Bill Bejeck <bbej...@gmail.com>
---
 33/connect.html | 4 ++--
 34/connect.html | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/33/connect.html b/33/connect.html
index 4d14b606..fa3f511c 100644
--- a/33/connect.html
+++ b/33/connect.html
@@ -377,7 +377,7 @@ errors.tolerance=all</pre>
 <p>If a sink connector supports exactly-once semantics, to enable exactly-once at the Connect worker level, you must ensure its consumer group is configured to ignore records in aborted transactions. You can do this by setting the worker property <code>consumer.isolation.level</code> to <code>read_committed</code> or, if running a version of Kafka Connect that supports it, using a <a href="#connectconfigs_connector.client.config.override.policy">connector client config override polic [...]
- <h5><a id="connect_exactlyoncesource" href="connect_exactlyoncesource">Source connectors</a></h5>
+ <h5><a id="connect_exactlyoncesource" href="#connect_exactlyoncesource">Source connectors</a></h5>
 <p>If a source connector supports exactly-once semantics, you must configure your Connect cluster to enable framework-level support for exactly-once source connectors. Additional ACLs may be necessary if running against a secured Kafka cluster.
 Note that exactly-once support for source connectors is currently only available in distributed mode; standalone Connect workers cannot provide exactly-once semantics.</p>
@@ -641,7 +641,7 @@ public abstract class SinkTask implements Task {
 <p>The <code>flush()</code> method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The <code>offsets</code> parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide ex [...]
 delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the <code>flush()</code> operation atomically commits the data and offsets to a final location in HDFS.</p>
- <h5><a id="connect_errantrecordreporter" href="connect_errantrecordreporter">Errant Record Reporter</a></h5>
+ <h5><a id="connect_errantrecordreporter" href="#connect_errantrecordreporter">Errant Record Reporter</a></h5>
 <p>When <a href="#connect_errorreporting">error reporting</a> is enabled for a connector, the connector can use an <code>ErrantRecordReporter</code> to report problems with individual records sent to a sink connector. The following example shows how a connector's <code>SinkTask</code> subclass might obtain and use the <code>ErrantRecordReporter</code>, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn [...]
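The patched passages in 33/connect.html above mention two worker-level settings. As background, a minimal sketch of what they might look like in a Connect worker properties file; the per-connector override line assumes an override policy that permits it (governed by the <code>connector.client.config.override.policy</code> property the docs link to):

```properties
# Worker-level: sink connectors' consumers ignore records from aborted transactions
consumer.isolation.level=read_committed

# Alternatively, per connector (set in the connector config, not the worker config),
# via the client config override mechanism referenced in the patched paragraph:
# consumer.override.isolation.level=read_committed

# Framework-level exactly-once support for source connectors
# (distributed mode only, per the note above)
exactly.once.source.support=enabled
```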
diff --git a/34/connect.html b/34/connect.html
index 8709e4c4..089fadb4 100644
--- a/34/connect.html
+++ b/34/connect.html
@@ -377,7 +377,7 @@ errors.tolerance=all</pre>
 <p>If a sink connector supports exactly-once semantics, to enable exactly-once at the Connect worker level, you must ensure its consumer group is configured to ignore records in aborted transactions. You can do this by setting the worker property <code>consumer.isolation.level</code> to <code>read_committed</code> or, if running a version of Kafka Connect that supports it, using a <a href="#connectconfigs_connector.client.config.override.policy">connector client config override polic [...]
- <h5><a id="connect_exactlyoncesource" href="connect_exactlyoncesource">Source connectors</a></h5>
+ <h5><a id="connect_exactlyoncesource" href="#connect_exactlyoncesource">Source connectors</a></h5>
 <p>If a source connector supports exactly-once semantics, you must configure your Connect cluster to enable framework-level support for exactly-once source connectors. Additional ACLs may be necessary if running against a secured Kafka cluster. Note that exactly-once support for source connectors is currently only available in distributed mode; standalone Connect workers cannot provide exactly-once semantics.</p>
@@ -648,7 +648,7 @@ public abstract class SinkTask implements Task {
 <p>The <code>flush()</code> method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The <code>offsets</code> parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide ex [...]
 delivery.
 For example, an HDFS connector could do this and use atomic move operations to make sure the <code>flush()</code> operation atomically commits the data and offsets to a final location in HDFS.</p>
- <h5><a id="connect_errantrecordreporter" href="connect_errantrecordreporter">Errant Record Reporter</a></h5>
+ <h5><a id="connect_errantrecordreporter" href="#connect_errantrecordreporter">Errant Record Reporter</a></h5>
 <p>When <a href="#connect_errorreporting">error reporting</a> is enabled for a connector, the connector can use an <code>ErrantRecordReporter</code> to report problems with individual records sent to a sink connector. The following example shows how a connector's <code>SinkTask</code> subclass might obtain and use the <code>ErrantRecordReporter</code>, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn [...]
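The Errant Record Reporter paragraph patched above introduces a doc example that is truncated in this archive view. As background, a minimal self-contained sketch of the null-safe pattern that paragraph describes. The nested <code>ErrantRecordReporter</code> interface and the <code>put</code>/<code>process</code> helpers below are illustrative stubs, not the real <code>org.apache.kafka.connect.sink</code> API (in the real API, <code>SinkTaskContext.errantRecordReporter()</code> may return null, and pre-2.6 runtimes lack the method entirely):

```java
import java.util.ArrayList;
import java.util.List;

public class ErrantRecordSketch {
    /** Stub standing in for org.apache.kafka.connect.sink.ErrantRecordReporter (illustrative only). */
    interface ErrantRecordReporter {
        void report(String record, Throwable error);
    }

    static final List<String> delivered = new ArrayList<>();

    /**
     * Null-safe handling: report a failed record when a reporter is available
     * (DLQ enabled, new-enough runtime), otherwise rethrow so the task fails
     * instead of silently dropping data.
     */
    static void put(List<String> records, ErrantRecordReporter reporter) {
        for (String record : records) {
            try {
                process(record);
            } catch (RuntimeException e) {
                if (reporter != null) {
                    reporter.report(record, e); // route the bad record to error reporting
                } else {
                    throw e; // no reporter available: fail the task
                }
            }
        }
    }

    /** Stand-in for "send record to the data sink"; rejects records marked bad. */
    static void process(String record) {
        if (record.startsWith("bad")) {
            throw new IllegalArgumentException("cannot deliver: " + record);
        }
        delivered.add(record);
    }

    public static void main(String[] args) {
        List<String> reported = new ArrayList<>();
        put(List.of("a", "bad-1", "b"), (rec, err) -> reported.add(rec));
        System.out.println("delivered=" + delivered + " reported=" + reported);
        // prints: delivered=[a, b] reported=[bad-1]
    }
}
```

With a null reporter the same bad record would instead propagate the exception and fail the task, which is the fallback behavior the patched paragraph calls "safely handling a null reporter".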