diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fdbbc0abdf..2d82cddc7e 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -728,7 +728,7 @@ include_dir 'conf.d'
       <listitem>
         <para>
           The maximum number of client sessions that can be handled by
-          one connection proxy when session pooling is switched on.
+          one connection proxy when session pooling is enabled.
           This parameter does not add any memory or CPU overhead, so
           specifying a large <varname>max_sessions</varname> value
           does not affect performance.
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
index 8486ce1e8d..a4b27209ef 100644
--- a/doc/src/sgml/connpool.sgml
+++ b/doc/src/sgml/connpool.sgml
@@ -9,22 +9,22 @@
 
   <para>
     <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
-    For large number of clients such model can cause consumption of large number of system
-    resources and lead to significant performance degradation, especially at computers with large
-    number of CPU cores. The reason is high contention between backends for postgres resources.
-    Also size of many Postgres internal data structures are proportional to the number of
-    active backends as well as complexity of algorithms for this data structures.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms that operate on these data structures.
   </para>
 
   <para>
-    This is why most of production Postgres installation are using some kind of connection pooling:
-    pgbouncer, J2EE, odyssey,... But external connection pooler requires additional efforts for installation,
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
     configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
-    single-threaded and so can be bottleneck for highload system, so multiple instances of pgbouncer have to be launched.
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
   </para>
 
   <para>
-    Starting from version 12 <productname>PostgreSQL</productname> provides built-in connection pooler.
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
     This chapter describes architecture and usage of built-in connection pooler.
   </para>
 
@@ -58,8 +58,8 @@
   </para>
 
   <para>
-    Built-in connection pooler is accepted connections on separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
-    If client is connected to postgres through standard port (<varname>port</varname> configuration option, default value is 5432), then normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
     with this client and is terminated when client is disconnected. Standard port is also used by proxy itself to
     launch new worker backends. It means that to enable connection pooler Postgres should be configured
     to accept local connections (<literal>pg_hba.conf</literal> file).
@@ -73,8 +73,8 @@
 
   <para>
     Postmaster accepts connections on proxy port and redirects it to one of connection proxies.
-    Right now sessions and bounded to proxy and can not migrate between them.
-    To provide uniform load balancing of proxies, postmaster is using one of three scheduling policies:
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
     <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
     In the last case postmaster will choose proxy with smallest number of already attached clients, with
     extra weight added to SSL connections (which consume more CPU).
@@ -92,14 +92,14 @@
   </para>
 
   <para>
-    <varname>connection_proxies</varname> specifies number of connection proxy processes which will be spawned.
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
     Default value is zero, so connection pooling is disabled by default.
   </para>
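+
+  <para>
+    For example, connection pooling could be enabled with a setting like the following
+    (a minimal sketch; the value is only illustrative, not a recommendation):
+<programlisting>
+connection_proxies = 2
+</programlisting>
+  </para>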
 
   <para>
-    <varname>session_pool_size</varname> specifies maximal number of backends per connection pool. Maximal number of laucnhed non-dedicated backends in pooling mode is limited by
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
     <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
-    If number of backends is too small, then server will not be able to utilize all system resources.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
     But too large value can cause degradation of performance because of large snapshots and lock contention.
   </para>
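+
+  <para>
+    As an illustrative sketch (the numbers are assumptions, not recommendations): with
+    <varname>connection_proxies</varname>=2, <varname>session_pool_size</varname>=10,
+    three databases and two roles used through the pooler, the number of non-dedicated
+    backends is limited by 2*10*3*2 = 120.
+  </para>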
 
@@ -112,7 +112,7 @@
   <para>
     Connection proxy accepts connections on special port, defined by <varname>proxy_port</varname>.
     Default value is 6543, but it can be changed to standard Postgres 5432, so by default all connections to the databases will be pooled.
-    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
     It is needed for connection pooler itself to launch worker backends.
   </para>
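+
+  <para>
+    For example, to make pooled connections use the standard port, the two ports can be
+    swapped (a sketch; the port chosen for dedicated connections is only an assumption):
+<programlisting>
+proxy_port = 5432    # pooled connections now use the standard port
+port = 5433          # direct (dedicated) connections; also used by the pooler itself
+</programlisting>
+  </para>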
 
@@ -129,8 +129,8 @@
   </para>
 
   <para>
-    As far as pooled backends are not terminated on client exist, it will not
-    be possible to drop database to which them are connected.  It can be achieved without server restart using <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> cause shutdown of all pooled backends after execution of <function>pg_reload_conf()</function> function. Then it will be possible to drop database.
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
   </para>
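+
+  <para>
+    For example, with <varname>restart_pooler_on_reload</varname> set to <literal>true</literal>
+    in <literal>postgresql.conf</literal>, dropping a database might look like this
+    (the database name is only an example):
+<programlisting>
+SELECT pg_reload_conf();
+DROP DATABASE mydb;
+</programlisting>
+  </para>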
 
  </sect1>
@@ -139,8 +139,8 @@
   <title>Built-in Connection Pooler Pros and Cons</title>
 
   <para>
-    Unlike pgbouncer and other external connection poolers, built-in connection pooler doesn't require installation and configuration of some other components.
-    Also it doesn't introduce any limitations for clients: existed clients can work through proxy and don't notice any difference.
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any additional components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
     If client application requires session context, then it will be served by dedicated backend. Such connection will not participate in
     connection pooling but it will correctly work. This is the main difference with pgbouncer,
     which may cause incorrect behavior of client application in case of using other session level pooling policy.
@@ -156,16 +156,15 @@
   </para>
 
   <para>
-    Redirecting connections through connection proxy definitely have negative effect on total system performance and especially latency.
-    Overhead of connection proxing depends on too many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
-    Pgbench benchmark in select-only mode shows almost two times worser performance for local connections through connection pooler comparing with direct local connections when
-    number of connections is small enough (10). For much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries, and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
   </para>
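+
+  <para>
+    Such a comparison can be reproduced with commands like the following (a sketch;
+    the port numbers, client count and duration are only assumptions):
+<programlisting>
+pgbench -S -c 10 -T 60 -p 5432 postgres   # direct connections
+pgbench -S -c 10 -T 60 -p 6543 postgres   # connections through the pooler
+</programlisting>
+  </para>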
 
   <para>
     Another obvious limitation of transaction level pooling is that long living transaction can cause starvation of
     other clients. It greatly depends on application design. If application opens database transaction and then waits for user input or some other external event, then backend can be in <emphasis>idle-in-transaction</emphasis>
-    state for long enough time. And such backend can not be rescheduled for some another session.
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend cannot be rescheduled for another session.
     The obvious recommendation is to avoid long-living transaction and setup <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
   </para>
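+
+  <para>
+    For example, such transactions could be aborted automatically with a setting like
+    the following (the timeout value is only an example):
+<programlisting>
+idle_in_transaction_session_timeout = '5min'
+</programlisting>
+  </para>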
 
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 029f0dc4e3..ee6e2bdeb6 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,7 +109,6 @@
   &mvcc;
   &perform;
   &parallel;
-  &connpool;
 
  </part>
 
@@ -159,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
