diff --git a/doc/src/sgml/hash.sgml b/doc/src/sgml/hash.sgml
index 6830b384bf..b4545a92ef 100644
--- a/doc/src/sgml/hash.sgml
+++ b/doc/src/sgml/hash.sgml
@@ -12,8 +12,9 @@
  <title>Overview</title>
 
  <para>
-  PostgreSQL includes an implementation of persistent on-disk hash indexes,
-  which are now fully crash recoverable. Any data type can be indexed by a
+  <productname>PostgreSQL</productname>
+  includes an implementation of persistent on-disk hash indexes,
+  which are fully crash recoverable. Any data type can be indexed by a
   hash index, including data types that do not have a well-defined linear
   ordering. Hash indexes store only the hash value of the data being
   indexed, thus there are no restrictions on the size of the data column
@@ -26,23 +27,25 @@
  </para>
 
  <para>
-  Hash indexes support only the = operator, so WHERE clauses that specify
+  Hash indexes support only the <literal>=</literal> operator,
+  so <literal>WHERE</literal> clauses that specify
   range operations will not be able to take advantage of hash indexes.
  </para>
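+
+ <para>
+  For example, a query with an equality predicate can use a hash index,
+  while a query with a range predicate cannot (the table, column, and
+  index names here are purely illustrative):
+<programlisting>
+CREATE INDEX orders_customer_idx ON orders USING hash (customer_id);
+
+-- an equality predicate can use the hash index
+SELECT * FROM orders WHERE customer_id = 42;
+
+-- a range predicate cannot use it
+SELECT * FROM orders WHERE customer_id BETWEEN 40 AND 50;
+</programlisting>
+ </para>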
 
  <para>
   Each hash index tuple stores just the 4-byte hash value, not the actual
   column value. As a result, hash indexes may be much smaller than B-trees
-  when indexing longer data items such as UUIDs, URLs etc.. The absence of
+  when indexing longer data items such as UUIDs, URLs, etc. The absence of
   the column value also makes all hash index scans lossy. Hash indexes may
   take part in bitmap index scans and backward scans.
  </para>
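+
+ <para>
+  One way to see this effect is to compare the on-disk sizes of a B-tree
+  and a hash index built on the same long column (the table and index
+  names here are only illustrative, and the actual sizes depend on the
+  data being indexed):
+<programlisting>
+CREATE INDEX pages_url_btree ON pages USING btree (url);
+CREATE INDEX pages_url_hash  ON pages USING hash (url);
+
+-- report both index sizes in human-readable form
+SELECT pg_size_pretty(pg_relation_size('pages_url_btree')) AS btree_size,
+       pg_size_pretty(pg_relation_size('pages_url_hash'))  AS hash_size;
+</programlisting>
+ </para>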
 
  <para>
-  Hash indexes are best optimized for SELECTs and UPDATEs using equality
-  scans on larger tables. In a B-tree index, searches must descend
+  Hash indexes are best optimized for <command>SELECT</command> and
+  <command>UPDATE</command>-heavy workloads that use equality scans
+  on larger tables.
+  In a B-tree index, searches must descend
   through the tree until the leaf page is found. In tables with millions
-  of rows this descent can increase access time to data. The equivalent
+  of rows, this descent can increase access time to data. The equivalent
   of a leaf page in a hash index is referred to as a bucket page. In
   contrast, a hash index allows accessing the bucket pages directly,
   thereby potentially reducing index access time in larger tables. This
@@ -56,7 +59,7 @@
   values are evenly distributed. When inserts mean that the bucket page
   becomes full, additional overflow pages are chained to that specific
   bucket page, locally expanding the storage for index tuples that match
-  that hash value. When scanning a hash bucket during queries we need to
+  that hash value. When scanning a hash bucket during queries, we need to
   scan through all of the overflow pages. Thus an unbalanced hash index
   might actually be worse than a B-tree in terms of number of block
   accesses required, for some data.
@@ -65,18 +68,19 @@
  <para>
   As a result of the overflow cases, we can say that hash indexes are
   most suitable for unique, nearly unique data or data with a low number
-  of rows per hash bucket will be suitable for hash indexes. One
-  possible way to avoid problems is to exclude highly non-unique values
-  from the index using a partial index condition, but this may not be
-  suitable in many cases.
+  of rows per hash bucket.
+  One possible way to avoid problems is to exclude highly non-unique
+  values from the index using a partial index condition, but this may
+  not be suitable in many cases.
  </para>
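+
+ <para>
+  For instance, if a single value is known to occur in a large fraction
+  of the rows, a partial index condition can keep those rows out of the
+  hash index (the table, column, and value here are only illustrative):
+<programlisting>
+-- exclude the highly non-unique value from the hash index
+CREATE INDEX events_session_hash ON events USING hash (session_id)
+  WHERE session_id &lt;&gt; 'anonymous';
+</programlisting>
+ </para>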
 
  <para>
   Like B-Trees, hash indexes perform simple index tuple deletion. This
   is a deferred maintenance operation that deletes index tuples that are
   known to be safe to delete (those whose item identifier's LP_DEAD bit
-  is already set). This is performed speculatively upon each insert,
-  though may not succeed if the page is pinned by another backend.
+  is already set). If an insert finds no space is available on a page, we
+  try to avoid creating a new overflow page by attempting to remove dead
+  index tuples. Removal cannot occur if the page is pinned at that time.
   Deletion of dead index pointers also occurs during VACUUM.  If an
   overflow page becomes empty, overflow pages can be recycled for reuse
   in other buckets, though we never return them to the operating system.
@@ -95,9 +99,7 @@
   incrementally expanded.  When a new bucket is to be added to the index,
   exactly one existing bucket will need to be "split", with some of its
   tuples being transferred to the new bucket according to the updated
-  key-to-bucket-number mapping.  This is essentially the same hash table
-  management technique embodied in src/backend/utils/hash/dynahash.c for
-  in-memory hash tables used within PostgreSQL internals.
+  key-to-bucket-number mapping.
  </para>
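+
+ <para>
+  The number of bucket and overflow pages in an existing hash index can
+  be inspected with the <function>pgstathashindex</function> function
+  from the <filename>pgstattuple</filename> extension (the index name
+  here is only illustrative):
+<programlisting>
+CREATE EXTENSION IF NOT EXISTS pgstattuple;
+
+-- show how many bucket and overflow pages the index currently has
+SELECT bucket_pages, overflow_pages, live_items, dead_items
+  FROM pgstathashindex('orders_customer_idx');
+</programlisting>
+ </para>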
 
  <para>
@@ -140,8 +142,8 @@
 
  <para>
   Each row in the table indexed is represented by a single index tuple in
-  the hash index. Hash index tuples are stored in the bucket pages, and if
-  they exist, the overflow pages. 
+  the hash index. Hash index tuples are stored in bucket pages and, if
+  they exist, in overflow pages.
   We speed up searches by keeping the index entries in any 
   one index page sorted by hash code, thus allowing binary search to be used
   within an index page.  Note however that there is *no* assumption about the
@@ -151,7 +153,7 @@
  <para>
   The bucket splitting algorithms to expand the hash index are too complex to
   be worthy of mention here, though are described in more detail in
-  src/backend/access/hash/README.
+  <filename>src/backend/access/hash/README</filename>.
   The split algorithm is crash safe and can be restarted if not completed
   successfully.
  </para>
