This is an automated email from the ASF dual-hosted git repository.

liuyu pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new b4e1605f038 [fix][doc] Align the apostrophe throughout the docs
b4e1605f038 is described below

commit b4e1605f03814df8df7b677b9ecc3db34d3d5dc4
Author: Jun Ma <60642177+momo-...@users.noreply.github.com>
AuthorDate: Thu Feb 9 16:37:50 2023 +0800

    [fix][doc] Align the apostrophe throughout the docs
---
 docs/about.md                           |  7 +++----
 docs/admin-api-transactions.md          |  2 +-
 docs/administration-isolation-bookie.md |  2 +-
 docs/administration-zk-bk.md            |  2 +-
 docs/client-api-overview.md             |  2 +-
 docs/concepts-cluster-level-failover.md |  2 +-
 docs/concepts-messaging.md              |  6 +++---
 docs/concepts-replication.md            |  4 ++--
 docs/cookbooks-retention-expiry.md      |  2 +-
 docs/developing-binary-protocol.md      |  2 +-
 docs/functions-concepts.md              |  4 ++--
 docs/functions-quickstart.md            |  4 ++--
 docs/io-cdc-debezium.md                 |  2 +-
 docs/io-cli.md                          | 20 ++++++++++----------
 docs/io-debezium-source.md              |  2 +-
 docs/reference-pulsar-admin.md          | 16 ++++++++--------
 docs/schema-overview.md                 |  4 ++--
 docs/schema-understand.md               |  6 +++---
 docs/security-kerberos.md               |  2 +-
 docs/security-tls-transport.md          |  2 +-
 docs/txn-how.md                         | 12 ++++++------
 docs/txn-use.md                         |  2 +-
 22 files changed, 53 insertions(+), 54 deletions(-)
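Most of the patch is a one-character substitution: the typographic right single quote (U+2019) in the listed docs becomes the plain ASCII apostrophe (U+0027); a few contractions are expanded instead (for example, "aren’t" becomes "are not"). A bulk normalization of this kind could be scripted roughly as follows — an illustrative sketch only, not part of this commit, with the `docs/` path assumed:

```python
# Illustrative sketch only -- not part of this commit. Assumes the Markdown
# sources live under docs/ and swaps the typographic apostrophe (U+2019)
# for the ASCII apostrophe (U+0027).
from pathlib import Path

for md in sorted(Path("docs").glob("*.md")):
    text = md.read_text(encoding="utf-8")
    fixed = text.replace("\u2019", "'")
    if fixed != text:
        md.write_text(fixed, encoding="utf-8")
        print(f"normalized apostrophes in {md}")
```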

diff --git a/docs/about.md b/docs/about.md
index b102dda3167..82fb693f39a 100644
--- a/docs/about.md
+++ b/docs/about.md
@@ -36,12 +36,12 @@ Select one of the content blocks below to begin your Pulsar journey. If you ...
 ## Continuous Improvement
 ***
 
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
+As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we're building content to support it. This welcome page is just the first step. We will be providing updates every month.
 
 ## Help Improve These Documents
 ***
 
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential! 
+You'll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it's essential! 
 
 :::tip
 
@@ -54,5 +54,4 @@ For how to make contributions to documentation, see [Pulsar Documentation Contri
 
 The Pulsar community on GitHub is active, passionate, and knowledgeable.  Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
 
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many Github Pulsar community members are Slack members too.  Join, hang out, learn, and make some new friends.
-
+An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You'll need an invitation to join, but many Github Pulsar community members are Slack members too.  Join, hang out, learn, and make some new friends.
\ No newline at end of file
diff --git a/docs/admin-api-transactions.md b/docs/admin-api-transactions.md
index 818409454e3..f80b37000c4 100644
--- a/docs/admin-api-transactions.md
+++ b/docs/admin-api-transactions.md
@@ -390,7 +390,7 @@ The coordinator's internal stats that can be retrieved include:
 * **managedLedgerName:** The name of the managed ledger where the transaction coordinator log is stored. 
 * **managedLedgerInternalStats:** The internal stats of the managed ledger where the transaction coordinator log is stored. See `[managedLedgerInternalStats](admin-api-topics.md#get-internal-stats)` for more details.
 
-Use one of the following ways to get coordinator’s internal stats:
+Use one of the following ways to get coordinator's internal stats:
 ````mdx-code-block
 <Tabs groupId="api-choice"
  defaultValue="pulsar-admin"
diff --git a/docs/administration-isolation-bookie.md b/docs/administration-isolation-bookie.md
index 9351c4d4146..28fcc30c94f 100644
--- a/docs/administration-isolation-bookie.md
+++ b/docs/administration-isolation-bookie.md
@@ -49,7 +49,7 @@ Rack-aware placement policy enforces different data replicas to be placed in dif
 
 #### Qualified rack size of bookies
 
-When the available rack size of bookies can meet the requirements configured on a topic, the rack-aware placement policy can work well and you don’t need any extra configurations.
+When the available rack size of bookies can meet the requirements configured on a topic, the rack-aware placement policy can work well and you don't need any extra configurations.
 
 For example, the BookKeeper cluster has 4 racks and 13 bookie instances as shown in the following diagram. When a topic is configured with `EnsembleSize=3, WriteQuorum=3, AckQuorum=2`, the BookKeeper client chooses one bookie instance from three different racks to write data to, such as Bookie2, Bookie8, and Bookie12.
 
diff --git a/docs/administration-zk-bk.md b/docs/administration-zk-bk.md
index a492fe6cc09..91e3a156902 100644
--- a/docs/administration-zk-bk.md
+++ b/docs/administration-zk-bk.md
@@ -279,7 +279,7 @@ Flag | Description
 `-a`, `--bookkeeper-ack-quorum` | Ack quorum (Q<sub>a</sub>) size, Number of guaranteed copies (acks to wait for before a write is considered completed) | 0
 `-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
 
-Please notice that sticky reads enabled by `bookkeeperEnableStickyReads=true` aren’t used unless ensemble size (E) equals write quorum (Q<sub>w</sub>) size. Sticky reads improve the efficiency of the Bookkeeper read ahead cache when all reads for a single ledger are sent to a single bookie.
+Please notice that sticky reads enabled by `bookkeeperEnableStickyReads=true` are not used unless ensemble size (E) equals write quorum (Q<sub>w</sub>) size. Sticky reads improve the efficiency of the Bookkeeper read ahead cache when all reads for a single ledger are sent to a single bookie.
 
 Some rules for choosing the values:
 
diff --git a/docs/client-api-overview.md b/docs/client-api-overview.md
index 344133e11d2..b7f31116564 100644
--- a/docs/client-api-overview.md
+++ b/docs/client-api-overview.md
@@ -9,7 +9,7 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 ````
 
-Pulsar client APIs allow you to create and configure producers, consumers, and readers; produce and consume messages; perform authentication and authorization tasks, and so on via programmable interfaces. They encapsulate and optimize Pulsar’s client-broker communication protocols and add additional features using Pulsar primitives. Pulsar exposes client APIs with language bindings for [Java](client-libraries-java.md), [C++](client-libraries-cpp.md), [Python](client-libraries-python.md), [...]
+Pulsar client APIs allow you to create and configure producers, consumers, and readers; produce and consume messages; perform authentication and authorization tasks, and so on via programmable interfaces. They encapsulate and optimize Pulsar's client-broker communication protocols and add additional features using Pulsar primitives. Pulsar exposes client APIs with language bindings for [Java](client-libraries-java.md), [C++](client-libraries-cpp.md), [Python](client-libraries-python.md), [...]
 
 ![Client APIs - Definition](/assets/client-api-definition.svg)
 
diff --git a/docs/concepts-cluster-level-failover.md b/docs/concepts-cluster-level-failover.md
index 4375913d4f9..8710c040dd7 100644
--- a/docs/concepts-cluster-level-failover.md
+++ b/docs/concepts-cluster-level-failover.md
@@ -30,7 +30,7 @@ Controlled cluster-level failover supports Pulsar clients switching from a prima
 </Tabs>
 ````
 
-Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time users won’t even notice a thing. Users can keep using applications and services without interruptions or timeouts.
+Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time users won't even notice a thing. Users can keep using applications and services without interruptions or timeouts.
 
 ### Why use cluster-level failover?
 
diff --git a/docs/concepts-messaging.md b/docs/concepts-messaging.md
index 764cf3aa757..9d8c4664dc1 100644
--- a/docs/concepts-messaging.md
+++ b/docs/concepts-messaging.md
@@ -31,7 +31,7 @@ Messages are the basic "unit" of Pulsar. The following table lists the component
 | Topic name           | The name of the topic that the message is published to. |
 | Schema version       | The version number of the schema that the message is produced with. |
 | Sequence ID          | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of a message is initially assigned by its producer, indicating its order in that sequence, and can also be customized.<br />Sequence ID can be used for message deduplication. If `brokerDeduplicationEnabled` is set to `true`, the sequence ID of each message is unique within a producer of a topic (non-partitioned) or a partition. |
-| Message ID           | The message ID of a message is assigned by bookies as soon as the message is persistently stored. Message ID indicates a message’s specific position in a ledger and is unique within a Pulsar cluster. |
+| Message ID           | The message ID of a message is assigned by bookies as soon as the message is persistently stored. Message ID indicates a message's specific position in a ledger and is unique within a Pulsar cluster. |
 | Publish time         | The timestamp of when the message is published. The timestamp is automatically applied by the producer. |
 | Event time           | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If nothing is set to event time, the value is `0`. |
 
@@ -821,7 +821,7 @@ The subscription mode indicates the cursor type.
 | Subscription mode | Description | Note |
 |:------------------|:------------|:-----|
 | `Durable`         | The cursor is durable, which retains messages and persists the current position. <br />If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position. | `Durable` is the **default** subscription mode. |
-| `NonDurable`      | The cursor is non-durable. <br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **can not** continue to be consumed from the last consumed position. | Reader’s subscription mode is `NonDurable` in nature and it does not prevent data in a topic from being deleted. Reader’s subscription mode **can not** be changed. |
+| `NonDurable`      | The cursor is non-durable. <br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **can not** continue to be consumed from the last consumed position. | Reader's subscription mode is `NonDurable` in nature and it does not prevent data in a topic from being deleted. Reader's subscription mode **can not** be changed. |
 
 A [subscription](#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name, they are independent of each other. If a consumer specifies a subscription that does not exist before, the subscription is automatically created.
 
@@ -831,7 +831,7 @@ By default, messages of a topic without any durable subscriptions are marked as
 
 #### How to use
 
-After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer’s configuration.
+After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer's configuration.
 
 ````mdx-code-block
 <Tabs
diff --git a/docs/concepts-replication.md b/docs/concepts-replication.md
index 64120d73e1e..b47c037719f 100644
--- a/docs/concepts-replication.md
+++ b/docs/concepts-replication.md
@@ -6,7 +6,7 @@ sidebar_label: "Geo Replication"
 
 Regardless of industries, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to quickly restore service to clients. However, a disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers. Such a multi-datacenter deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
-Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
 
 The diagram below illustrates the process of [geo-replication](administration-geo.md). Whenever three producers (P1, P2 and P3) respectively publish messages to the T1 topic in three clusters, those messages are instantly replicated across clusters. Once the messages are replicated, two consumers (C1 and C2) can consume those messages from their clusters.
 
@@ -24,7 +24,7 @@ An asynchronous geo-replicated cluster is composed of multiple physical clusters
 
 In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
 
-Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees due to the potential replication lag that some data hasn’t been replicated. 
+Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees due to the potential replication lag that some data hasn't been replicated. 
 
 ### Synchronous geo-replication via BookKeeper
 
diff --git a/docs/cookbooks-retention-expiry.md b/docs/cookbooks-retention-expiry.md
index 5d75fb58c72..943c73d2bb5 100644
--- a/docs/cookbooks-retention-expiry.md
+++ b/docs/cookbooks-retention-expiry.md
@@ -455,7 +455,7 @@ admin.namespaces().removeNamespaceMessageTTL(namespace)
 ## Delete messages from namespaces
 
 When it comes to the physical storage size, message expiry and retention are just like two sides of the same coin.
-* The backlog quota and TTL parameters prevent disk size from growing indefinitely, as Pulsar’s default behavior is to persist unacknowledged messages. 
+* The backlog quota and TTL parameters prevent disk size from growing indefinitely, as Pulsar's default behavior is to persist unacknowledged messages. 
 * The retention policy allocates storage space to accommodate the messages that are supposed to be deleted by Pulsar by default.
 
 In conclusion, the size of your physical storage should accommodate the sum of the backlog quota and the retention size. 
diff --git a/docs/developing-binary-protocol.md b/docs/developing-binary-protocol.md
index 16459a2a1bd..bbbc70f9f27 100644
--- a/docs/developing-binary-protocol.md
+++ b/docs/developing-binary-protocol.md
@@ -435,7 +435,7 @@ Fields:
 
 ##### Command AckResponse
 
-An `AckResponse` is the broker’s response to acknowledge a request sent by the client. It contains the `consumer_id` sent in the request.
+An `AckResponse` is the broker's response to acknowledge a request sent by the client. It contains the `consumer_id` sent in the request.
 If a transaction is used, it contains both the Transaction ID and the Request ID that are sent in the request.
 The client finishes the specific request according to the Request ID.
 If the `error` field is set, it indicates that the request has failed.
diff --git a/docs/functions-concepts.md b/docs/functions-concepts.md
index 44a6451bd68..94e07b298a8 100644
--- a/docs/functions-concepts.md
+++ b/docs/functions-concepts.md
@@ -73,7 +73,7 @@ Pulsar provides three different messaging delivery semantics that you can apply
 
 | Delivery semantics | Description | Adopted subscription type |
 |--------------------|-------------|---------------------------|
-| **At-most-once** delivery | Each message sent to a function is processed at its best effort. There’s no guarantee that the message will be processed or not. <br /><br /> When you select this semantic, the `autoAck` configuration must be set to `true`, otherwise the startup will fail (the `autoAck` configuration will be deprecated in future releases). <br /><br /> **Ack time node**: Before function processing. | Shared |
+| **At-most-once** delivery | Each message sent to a function is processed at its best effort. There is no guarantee that the message will be processed or not. <br /><br /> When you select this semantic, the `autoAck` configuration must be set to `true`, otherwise the startup will fail (the `autoAck` configuration will be deprecated in future releases). <br /><br /> **Ack time node**: Before function processing. | Shared |
 | **At-least-once** delivery (default) | Each message sent to a function can be processed more than once (in case of a processing failure or redelivery).<br /><br />If you create a function without specifying the `--processing-guarantees` flag, the function provides `at-least-once` delivery guarantee. <br /><br /> **Ack time node**: After sending a message to output. | Shared |
 | **Effectively-once** delivery | Each message sent to a function can be processed more than once but it has only one output. Duplicated messages are ignored.<br /><br />`Effectively once` is achieved on top of `at-least-once` processing and guaranteed server-side deduplication. This means a state update can happen twice, but the same state update is only applied once, the other duplicated state update is discarded on the server-side. <br /><br /> **Ack time node**: After sending a messa [...]
 | **Manual** delivery | When you select this semantic, the framework does not perform any ack operations, and you need to call the method `context.getCurrentRecord().ack()` inside a function to manually perform the ack operation. <br /><br /> **Ack time node**: User-defined within function methods. | Shared |
@@ -161,7 +161,7 @@ Both trigger policy and eviction policy are driven by either time or count.
 :::tip
 
 Both processing time and event time are supported.
- * Processing time is defined based on the wall time when the function instance builds and processes a window. The judging of window completeness is straightforward and you don’t have to worry about data arrival disorder. 
+ * Processing time is defined based on the wall time when the function instance builds and processes a window. The judging of window completeness is straightforward and you don't have to worry about data arrival disorder. 
  * Event time is defined based on the timestamps that come with the event record. It guarantees event time correctness but also offers more data buffering and a limited completeness guarantee.
    
 :::
diff --git a/docs/functions-quickstart.md b/docs/functions-quickstart.md
index a4ff1605c49..84625610a5a 100644
--- a/docs/functions-quickstart.md
+++ b/docs/functions-quickstart.md
@@ -120,7 +120,7 @@ Before starting functions, you need to [start Pulsar](#start-standalone-pulsar)
 
    :::tip
 
-   You can see both the `example-function-config.yaml` and `api-examples.jar` files under the `examples` folder of the Pulsar’s directory on your local machine.
+   You can see both the `example-function-config.yaml` and `api-examples.jar` files under the `examples` folder of the Pulsar's directory on your local machine.
 
    This example function will add a `!` at the end of every message.
 
@@ -565,7 +565,7 @@ Before starting window functions, you need to [start Pulsar](#start-standalone-p
    }
    ```
 
-3. In the same terminal window as step 1, verify the function’s status.
+3. In the same terminal window as step 1, verify the function's status.
 
    ```bash
    bin/pulsar-admin functions status \
diff --git a/docs/io-cdc-debezium.md b/docs/io-cdc-debezium.md
index 70010e478a3..18ccbfd4658 100644
--- a/docs/io-cdc-debezium.md
+++ b/docs/io-cdc-debezium.md
@@ -23,7 +23,7 @@ The configuration of the Debezium source connector has the following properties.
 | `database.port` | true | null | The port number of a database server.|
 | `database.user` | true | null | The name of a database user that has the required privileges. |
 | `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.server.id` | true | null | The connector's identifier that must be unique within a database cluster and similar to the database's server-id configuration property. |
 | `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
 | `database.whitelist` | false | null | A list of all databases hosted by this server that is monitored by the connector.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
 | `key.converter` | true | null | The converter provided by Kafka Connect to convert the record key. |
diff --git a/docs/io-cli.md b/docs/io-cli.md
index c052340d980..da87cb1cd92 100644
--- a/docs/io-cli.md
+++ b/docs/io-cli.md
@@ -255,16 +255,16 @@ pulsar-admin sources localrun options
 |`--destination-topic-name`|The Pulsar topic to which data is sent.
 |`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
 |`--hostname-verification-enabled`|Enable hostname verification.<br />**Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run).|
+|`--name`|The source's name.|
+|`--namespace`|The source's namespace.|
+|`--parallelism`|The source's parallelism factor, that is, the number of source instances to run).|
 |`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic. <br />The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
 |`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
 | `-st`, `--schema-type` | The schema type.<br /> Either a built-in schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source.
 |`--source-config`|Source config key/values.
-|`--source-config-file`|The path to a YAML config file specifying the source’s configuration.
+|`--source-config-file`|The path to a YAML config file specifying the source's configuration.
 |`--source-type`|The source's connector provider.
-|`--tenant`|The source’s tenant.
+|`--tenant`|The source's tenant.
 |`--tls-allow-insecure`|Allow insecure tls connection.<br />**Default value: false**.
 |`--tls-trust-cert-path`|The tls trust cert file path.
 |`--use-tls`|Use tls connection.<br />**Default value: false**.
@@ -546,17 +546,17 @@ pulsar-admin sinks localrun options
 |`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
 |`--hostname-verification-enabled`|Enable hostname verification.<br />**Default value: false**.
 | `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run).|
+|`--name`|The sink's name.|
+|`--namespace`|The sink's namespace.|
+|`--parallelism`|The sink's parallelism factor, that is, the number of sink instances to run).|
 |`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation. <br />The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
 |`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
 |`--retain-ordering` | Sink consumes and sinks messages in order.
 |`--sink-config`|sink config key/values.
-|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration.
+|`--sink-config-file`|The path to a YAML config file specifying the sink's configuration.
 |`--sink-type`|The sink's connector provider.
 |`--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer.
-|`--tenant`|The sink’s tenant.
+|`--tenant`|The sink's tenant.
 | `--timeout-ms` | The message timeout in milliseconds.
 | `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. |
 |`--tls-allow-insecure`|Allow insecure tls connection.<br />**Default value: false**.
diff --git a/docs/io-debezium-source.md b/docs/io-debezium-source.md
index ff2ad926ffe..86d9b1f69a1 100644
--- a/docs/io-debezium-source.md
+++ b/docs/io-debezium-source.md
@@ -23,7 +23,7 @@ The configuration of the Debezium source connector has the following properties.
 | `database.port` | true | null | The port number of a database server.|
 | `database.user` | true | null | The name of a database user that has the required privileges. |
 | `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.server.id` | true | null | The connector's identifier that must be unique within a database cluster and similar to the database's server-id configuration property. |
 | `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
 | `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the  connector.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
 | `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
diff --git a/docs/reference-pulsar-admin.md b/docs/reference-pulsar-admin.md
index 429e89bf1fb..1e497951832 100644
--- a/docs/reference-pulsar-admin.md
+++ b/docs/reference-pulsar-admin.md
@@ -515,7 +515,7 @@ Options
 |`--namespace`|The function's namespace||
 |`--output`|The function's output topic (If none is specified, no output is written)||
 |`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
 |`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
 |`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
@@ -523,7 +523,7 @@ Options
 |`--sliding-interval-count`|The number of messages after which the window slides||
 |`--sliding-interval-duration-ms`|The time duration after which the window slides||
 |`--state-storage-service-url`|The URL for the state storage service. By default, it it set to the service URL of the Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally. ||
-|`--tenant`|The function’s tenant||
+|`--tenant`|The function's tenant||
 |`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)||
 |`--user-config`|User-defined config key/values||
 |`--window-length-count`|The number of messages per window||
@@ -566,17 +566,17 @@ Options
 |`--log-topic`|The topic to which the function's logs are produced||
 |`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--name`|The function's name||
-|`--namespace`|The function’s namespace||
+|`--namespace`|The function's namespace||
 |`--output`|The function's output topic (If none is specified, no output is written)||
 |`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
 |`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
 |`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--schema-type`|The built-in schema type or custom schema class name to be used for messages output by the function||
 |`--sliding-interval-count`|The number of messages after which the window slides||
 |`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--tenant`|The function’s tenant||
+|`--tenant`|The function's tenant||
 |`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)||
 |`--user-config`|User-defined config key/values||
 |`--window-length-count`|The number of messages per window||
@@ -635,17 +635,17 @@ Options
 |`--log-topic`|The topic to which the function's logs are produced||
 |`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--name`|The function's name||
-|`--namespace`|The function’s namespace||
+|`--namespace`|The function's namespace||
 |`--output`|The function's output topic (If none is specified, no output is written)||
 |`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
 |`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
 |`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
 |`--schema-type`|The built-in schema type or custom schema class name to be used for messages output by the function||
 |`--sliding-interval-count`|The number of messages after which the window slides||
 |`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--tenant`|The function’s tenant||
+|`--tenant`|The function's tenant||
 |`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)||
 |`--user-config`|User-defined config key/values||
 |`--window-length-count`|The number of messages per window||
diff --git a/docs/schema-overview.md b/docs/schema-overview.md
index 53aa79a21c8..583dfb85034 100644
--- a/docs/schema-overview.md
+++ b/docs/schema-overview.md
@@ -17,7 +17,7 @@ Pulsar messages are stored as unstructured byte arrays and the data structure (a
 
 Pulsar schema is the metadata that defines how to translate the raw message bytes into a more formal structure type, serving as a protocol between the applications that generate messages and the applications that consume them. It serializes data into raw bytes before they are published to a topic and deserializes the raw bytes before they are delivered to consumers.
 
-Pulsar uses a schema registry as a central repository to store the registered schema information, which enables producers/consumers to coordinate the schema of a topic’s messages through brokers.
+Pulsar uses a schema registry as a central repository to store the registered schema information, which enables producers/consumers to coordinate the schema of a topic's messages through brokers.
 
 ![Pulsar schema](/assets/schema.svg)
 
@@ -58,7 +58,7 @@ This diagram illustrates how Pulsar schema works on the Producer side.
    * Otherwise, go to step 4.
 
 4. The broker checks whether the schema can be auto-updated. 
-   * If it’s not allowed to be auto-updated, then the schema cannot be registered, and the broker rejects the producer.
+   * If it's not allowed to be auto-updated, then the schema cannot be registered, and the broker rejects the producer.
    * Otherwise, go to step 5.
 
 5. The broker performs the [schema compatibility check](schema-understand.md#schema-compatibility-check) defined for the topic.
diff --git a/docs/schema-understand.md b/docs/schema-understand.md
index 3aa27c0056b..e6b649c0983 100644
--- a/docs/schema-understand.md
+++ b/docs/schema-understand.md
@@ -250,7 +250,7 @@ Schema validation enforcement enables brokers to reject producers/consumers with
 
 By default, schema validation enforcement is only **disabled** (`isSchemaValidationEnforced`=`false`) for producers, which means:
 * A producer without a schema can produce any messages to a topic with schemas, which may result in producing trash data to the topic. 
-* Clients that don’t support schema are allowed to produce messages to a topic with schemas.
+* Clients that don't support schema are allowed to produce messages to a topic with schemas.
 
 For how to enable schema validation enforcement, see [Manage schema validation](admin-api-schemas.md#manage-schema-validation).
 
@@ -343,9 +343,9 @@ By default, schema `AutoUpdate` is enabled. When a schema passes the schema comp
 
 For a producer, the `AutoUpdate` happens in the following cases:
 
-* If a **topic doesn’t have a schema** (meaning the data is in raw bytes), Pulsar registers the schema automatically.
+* If a **topic doesn't have a schema** (meaning the data is in raw bytes), Pulsar registers the schema automatically.
 
-* If a **topic has a schema** and the **producer doesn’t carry any schema** (meaning it produces raw bytes):
+* If a **topic has a schema** and the **producer doesn't carry any schema** (meaning it produces raw bytes):
 
     * If [schema validation enforcement](#schema-validation-enforcement) is **disabled** (`schemaValidationEnforced`=`false`) in the namespace that the topic belongs to, the producer is allowed to connect to the topic and produce data. 
   
diff --git a/docs/security-kerberos.md b/docs/security-kerberos.md
index 150173bebae..3d15e3ef399 100644
--- a/docs/security-kerberos.md
+++ b/docs/security-kerberos.md
@@ -107,7 +107,7 @@ If your machines configured with Kerberos already have a system-wide configurati
 
 :::
 
-The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.
+The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK's Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.
 
 To specify the path to the `krb5.conf` file for brokers, enter the command below. 
 
diff --git a/docs/security-tls-transport.md b/docs/security-tls-transport.md
index cbe586d6f8b..fa39d1155ad 100644
--- a/docs/security-tls-transport.md
+++ b/docs/security-tls-transport.md
@@ -397,7 +397,7 @@ By default, Pulsar uses [Conscrypt](https://github.com/google/conscrypt) for bot
 
 ### Generate JKS certificate
 
-You can use Java’s `keytool` utility to generate the key and certificate for each machine in the cluster. 
+You can use Java's `keytool` utility to generate the key and certificate for each machine in the cluster. 
 
 ```bash
 DAYS=365
diff --git a/docs/txn-how.md b/docs/txn-how.md
index 9fe8658a172..8f67ea5cccc 100644
--- a/docs/txn-how.md
+++ b/docs/txn-how.md
@@ -60,7 +60,7 @@ Before introducing the transaction in Pulsar, a producer is created and then mes
 
 ![](/assets/txn-3.png)
 
-Let’s walk through the steps for _beginning a transaction_.
+Let's walk through the steps for _beginning a transaction_.
 
 | Step  |  Description  | 
 | --- | --- |
@@ -75,7 +75,7 @@ In this stage, the Pulsar client enters a transaction loop, repeating the `consu
 
 ![](/assets/txn-4.png)
 
-Let’s walk through the steps for _publishing messages with a transaction_.
+Let's walk through the steps for _publishing messages with a transaction_.
 
 | Step  |  Description  | 
 | --- | --- |
@@ -92,7 +92,7 @@ In this phase, the Pulsar client sends a request to the transaction coordinator
 
 ![](/assets/txn-5.png)
 
-Let’s walk through the steps for _acknowledging messages with a transaction_.
+Let's walk through the steps for _acknowledging messages with a transaction_.
 
 | Step  |  Description  | 
 | --- | --- |
@@ -113,7 +113,7 @@ When the Pulsar client finishes a transaction, it issues an end transaction requ
 
 ![](/assets/txn-6.png)
 
-Let’s walk through the steps for _ending the transaction_.
+Let's walk through the steps for _ending the transaction_.
 
 | Step  |  Description  | 
 | --- | --- |
@@ -127,7 +127,7 @@ The transaction coordinator starts the process of committing or aborting message
 
 ![](/assets/txn-7.png)
 
-Let’s walk through the steps for _finalizing a transaction_.
+Let's walk through the steps for _finalizing a transaction_.
 
 | Step  |  Description  | 
 | --- | --- |
@@ -141,7 +141,7 @@ The transaction coordinator writes the final transaction status to the transacti
 
 ![](/assets/txn-8.png)
 
-Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
+Let's walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
 
 | Step  |  Description  | 
 | --- | --- |
diff --git a/docs/txn-use.md b/docs/txn-use.md
index 59ccdf0f497..0ff82cb84ea 100644
--- a/docs/txn-use.md
+++ b/docs/txn-use.md
@@ -69,7 +69,7 @@ Now you can start using the transaction API to send and receive messages. Below
 
 ![](/assets/txn-9.png)
 
-Let’s walk through this example step by step.
+Let's walk through this example step by step.
 
 | Step  |  Description  | 
 | --- | --- |

