Update the hash library documentation to reflect the new implementation changes.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 doc/guides/prog_guide/hash_lib.rst | 138 +++++++++++++++++++++++++++++++++----
 doc/guides/prog_guide/index.rst    |   4 ++
 2 files changed, 129 insertions(+), 13 deletions(-)

diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
index 9b83835..193dd53 100644
--- a/doc/guides/prog_guide/hash_lib.rst
+++ b/doc/guides/prog_guide/hash_lib.rst
@@ -1,5 +1,5 @@
 ..  BSD LICENSE
-    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+    Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
     All rights reserved.

     Redistribution and use in source and binary forms, with or without
@@ -50,8 +50,6 @@ The hash also allows the configuration of some low-level implementation related

 *   Hash function to translate the key into a bucket index

-*   Number of entries per bucket
-
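+As an illustration, a table with these parameters could be created as follows (a minimal
+sketch: the values are arbitrary and ``rte_jhash`` is just one possible choice of hash function):
+
+.. code-block:: c
+
+    #include <rte_hash.h>
+    #include <rte_jhash.h>
+
+    struct rte_hash_parameters params = {
+        .name = "example_hash",         /* name of the hash table */
+        .entries = 1024,                /* total number of entries */
+        .key_len = sizeof(uint32_t),    /* size of the key, in bytes */
+        .hash_func = rte_jhash,         /* configurable hash function */
+        .hash_func_init_val = 0,
+        .socket_id = 0,                 /* NUMA socket for memory allocation */
+    };
+
+    struct rte_hash *handle = rte_hash_create(&params);
+    if (handle == NULL) {
+        /* creation failed: e.g. duplicate name or not enough memory */
+    }
+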
 The main methods exported by the hash are:

 *   Add entry with key: The key is provided as input. If a new entry is successfully added to the hash for the specified key,
@@ -65,10 +63,26 @@ The main methods exported by the hash are:
 *   Lookup for entry with key: The key is provided as input. If an entry with the specified key is found in the hash (lookup hit),
     then the position of the entry is returned, otherwise (lookup miss) a negative value is returned.

-The current hash implementation handles the key management only.
-The actual data associated with each key has to be managed by the user using a separate table that
+Apart from the methods explained above, the API allows the user three more options, illustrated in the sketch after this list:
+
+*   Add / lookup / delete with key and precomputed hash: Both the key and its precomputed hash are provided as input. This allows
+    the user to perform these operations faster, as the hash is already computed.
+
+*   Add / lookup with key and data: A key-value pair is provided as input. This allows the user to store
+    not only the key, but also the data, which may be either an 8-byte integer or a pointer to external data (if the data size is more than 8 bytes).
+
+*   Combination of the two options above: The user can provide the key, the precomputed hash and the data.
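+
+A minimal sketch of these variants, reusing the ``handle`` created earlier (the ``_data`` and
+``_with_hash`` rte_hash calls shown here are the API functions for these options):
+
+.. code-block:: c
+
+    uint32_t key = 1;
+    uint64_t counter = 0;
+    void *data;
+
+    /* Add / lookup with key and data (here, a pointer to a counter) */
+    if (rte_hash_add_key_data(handle, &key, &counter) < 0) {
+        /* add failed: e.g. table full */
+    }
+    if (rte_hash_lookup_data(handle, &key, &data) >= 0)
+        (*(uint64_t *)data)++;          /* lookup hit: update the stored data */
+
+    /* Add / lookup / delete with key and precomputed hash */
+    hash_sig_t sig = rte_hash_hash(handle, &key);
+    int32_t pos = rte_hash_lookup_with_hash(handle, &key, sig);
+    if (pos < 0) {
+        /* lookup miss */
+    }
+    rte_hash_del_key_with_hash(handle, &key, sig);
+
+    /* Combination: key, precomputed hash and data */
+    rte_hash_add_key_with_hash_data(handle, &key, sig, &counter);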
+
+Also, the API contains a method to allow the user to look up entries in bursts, achieving higher performance
+than looking up individual entries, as the function prefetches the next entries while it is operating
+on the first ones, which significantly reduces the impact of the necessary memory accesses.
+Notice that this method uses a pipeline of 8 entries (4 stages of 2 entries), so it is highly recommended
+to use at least 8 entries per burst.
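+
+For example, a burst lookup could look as follows (a minimal sketch, again assuming the
+``handle`` created earlier; ``rte_hash_lookup_bulk()`` fills ``positions`` with the entry
+position for each key, or a negative value on a miss):
+
+.. code-block:: c
+
+    #define BURST_SIZE 8                /* at least 8, to fill the pipeline */
+
+    uint32_t keys[BURST_SIZE];
+    const void *key_ptrs[BURST_SIZE];
+    int32_t positions[BURST_SIZE];
+    unsigned i;
+
+    for (i = 0; i < BURST_SIZE; i++) {
+        keys[i] = i;
+        key_ptrs[i] = &keys[i];
+    }
+
+    /* Look up all the keys in one burst */
+    rte_hash_lookup_bulk(handle, key_ptrs, BURST_SIZE, positions);
+
+    for (i = 0; i < BURST_SIZE; i++) {
+        if (positions[i] >= 0) {
+            /* lookup hit for keys[i], stored at positions[i] */
+        }
+    }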
+
+The actual data associated with each key can either be managed by the user using a separate table that
+mirrors the hash in terms of number of entries and position of each entry,
+as shown in the Flow Classification use case described in the following sections,
+or stored in the hash table itself.
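+
+A minimal sketch of the first approach, with a hypothetical ``flow_data`` application table
+(the position returned by the hash is used as the index into the user table):
+
+.. code-block:: c
+
+    struct flow_data {
+        uint8_t port;                       /* hypothetical per-flow data */
+    } flow_table[1024];                     /* same number of entries as the hash */
+
+    uint32_t key = 1;
+    uint8_t dst_port = 3;
+
+    int32_t pos = rte_hash_add_key(handle, &key);
+    if (pos >= 0)
+        flow_table[pos].port = dst_port;    /* store at the returned position */
+
+    pos = rte_hash_lookup(handle, &key);
+    if (pos >= 0)
+        dst_port = flow_table[pos].port;    /* retrieve the mirrored data */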

 The example hash tables in the L2/L3 Forwarding sample applications defines which port to forward a packet to based on a packet flow identified by the five-tuple lookup.
 However, this table could also be used for more sophisticated features and provide many other functions and actions that could be performed on the packets and flows.
@@ -76,17 +90,26 @@ However, this table could also be used for more sophisticated features and provi
 Implementation Details
 ----------------------

-The hash table is implemented as an array of entries which is further divided into buckets,
-with the same number of consecutive array entries in each bucket.
-For any input key, there is always a single bucket where that key can be stored in the hash,
-therefore only the entries within that bucket need to be examined when the key is looked up.
+The hash table has two main tables, pictured in the conceptual sketch after this list:
+
+* The first table is an array of entries which is further divided into buckets,
+  with the same number of consecutive array entries in each bucket. Each entry contains the computed primary
+  and secondary hashes of a given key (explained below), and an index into the second table.
+
+* The second table is an array of all the keys stored in the hash table and the data associated with each key.
+
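+Conceptually, the layout could be pictured like this (an illustrative sketch only, not the
+library's actual internal definitions; ``KEY_LEN``, ``NUM_BUCKETS``, ``ENTRIES_PER_BUCKET``
+and ``NUM_ENTRIES`` are placeholder constants):
+
+.. code-block:: c
+
+    struct bucket_entry {
+        uint32_t primary_sig;       /* primary hash of the key */
+        uint32_t secondary_sig;     /* secondary (alternative) hash */
+        uint32_t key_index;         /* index into the second table */
+    };
+
+    struct key_data_entry {
+        uint8_t key[KEY_LEN];       /* the full key */
+        void *data;                 /* data associated with the key */
+    };
+
+    /* first table: buckets of entries; second table: keys and their data */
+    struct bucket_entry first_table[NUM_BUCKETS][ENTRIES_PER_BUCKET];
+    struct key_data_entry second_table[NUM_ENTRIES];
+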
+The hash library uses the cuckoo hash method to resolve collisions.
+For any input key, there are two possible buckets (primary and secondary/alternative location)
+where that key can be stored in the hash, therefore only the entries within those buckets need to be examined
+when the key is looked up.
 The lookup speed is achieved by reducing the number of entries to be scanned from the total
-number of hash entries down to the number of entries in a hash bucket,
+number of hash entries down to the number of entries in the two hash buckets,
 as opposed to the basic method of linearly scanning all the entries in the array.
 The hash uses a hash function (configurable) to translate the input key into a 4-byte key signature.
 The bucket index is the key signature modulo the number of hash buckets.
-Once the bucket is identified, the scope of the hash add,
-delete and lookup operations is reduced to the entries in that bucket.
+
+Once the buckets are identified, the scope of the hash add,
+delete and lookup operations is reduced to the entries in those buckets (it is very likely that the entries are in the primary bucket).
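+
+As a sketch, the two candidate buckets could be derived as follows (illustrative only;
+``secondary_hash()`` stands for whatever transformation the implementation applies to derive
+the alternative signature):
+
+.. code-block:: c
+
+    uint32_t sig = hash_func(key, key_len, init_val);   /* 4-byte key signature */
+    uint32_t alt_sig = secondary_hash(sig);             /* alternative signature */
+
+    uint32_t primary_bucket = sig % num_buckets;
+    uint32_t secondary_bucket = alt_sig % num_buckets;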

 To speed up the search logic within the bucket, each hash entry stores the 4-byte key signature together with the full key for each hash entry.
 For large key sizes, comparing the input key against a key from the bucket can take significantly more time than
@@ -95,6 +118,95 @@ Therefore, the signature comparison is done first and the full key comparison do
 The full key comparison is still necessary, as two input keys from the same bucket can still potentially have the same 4-byte hash signature,
 although this event is relatively rare for hash functions providing good uniform distributions for the set of input keys.

+Example of lookup:
+
+First of all, the primary bucket is identified, as the entry is likely to be stored there.
+If the signature is stored there, its key is compared against the one provided, and the position
+where it was stored and/or the data associated with that key are returned if there is a match.
+If the signature is not in the primary bucket, the secondary bucket is looked up, where the same procedure
+is carried out. If there is no match there either, the key is considered not to be in the table.
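+
+In illustrative C, reusing the conceptual structures sketched above (again, a simplification,
+not the library's actual internals):
+
+.. code-block:: c
+
+    #include <string.h>
+
+    int conceptual_lookup(uint32_t sig, uint32_t alt_sig, const uint8_t *key)
+    {
+        /* scan the primary bucket first, then the secondary one */
+        uint32_t buckets[2] = { sig % NUM_BUCKETS, alt_sig % NUM_BUCKETS };
+        unsigned b, i;
+
+        for (b = 0; b < 2; b++) {
+            for (i = 0; i < ENTRIES_PER_BUCKET; i++) {
+                struct bucket_entry *e = &first_table[buckets[b]][i];
+
+                /* compare the 4-byte signature first (cheap)... */
+                if (e->primary_sig != sig)
+                    continue;
+                /* ...and the full key only on a signature match */
+                if (memcmp(second_table[e->key_index].key, key, KEY_LEN) == 0)
+                    return e->key_index;    /* lookup hit: position in table */
+            }
+        }
+        return -1;                          /* lookup miss */
+    }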
+
+Example of addition:
+
+Like lookup, the primary and secondary buckets are identified. If there is an empty slot in
+the primary bucket, the primary and secondary signatures are stored in that slot, the key and data (if any) are added to
+the second table and an index to the position in the second table is stored in the slot of the first table.
+If there is no space in the primary bucket, one of the entries in that bucket is pushed to its alternative location,
+and the key to be added is inserted in its position.
+To know where the alternative bucket of the evicted entry is, its secondary signature is looked up and the alternative bucket index
+is calculated by taking the modulo, as seen above. If there is room in the alternative bucket, the evicted entry
+is stored in it. If not, the same process is repeated (one of the entries gets pushed) until a non-full bucket is found.
+Notice that despite all the entry movement in the first table, the second table is not touched, as touching it would greatly
+impact performance.
+
+In the very unlikely event that the table enters a loop, where the same entries are evicted indefinitely,
+the key is considered unable to be stored.
+With random keys, this method allows the user to reach around 90% table utilization, without
+having to drop any stored entry (LRU) or allocate more memory (extended buckets).
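+
+The displacement loop described above could be sketched as follows (illustrative only,
+building on the conceptual structures from the previous section; ``FREE_SLOT`` is a
+placeholder marker for an unused slot, and the sketch assumes each evicted entry was
+sitting in its primary bucket):
+
+.. code-block:: c
+
+    #define MAX_DISPLACEMENTS 100       /* assume a loop after this many pushes */
+
+    int conceptual_add(struct bucket_entry new_entry)
+    {
+        uint32_t bucket = new_entry.primary_sig % NUM_BUCKETS;
+        unsigned n, i;
+
+        for (n = 0; n < MAX_DISPLACEMENTS; n++) {
+            /* use a free slot in the current bucket if there is one */
+            for (i = 0; i < ENTRIES_PER_BUCKET; i++) {
+                if (first_table[bucket][i].key_index == FREE_SLOT) {
+                    first_table[bucket][i] = new_entry;
+                    return 0;
+                }
+            }
+
+            /* no room: evict an entry to its alternative bucket and
+             * take its slot; the second table is never touched */
+            struct bucket_entry evicted = first_table[bucket][0];
+            first_table[bucket][0] = new_entry;
+            new_entry = evicted;
+            bucket = evicted.secondary_sig % NUM_BUCKETS;
+        }
+        return -1;                      /* loop detected: key cannot be stored */
+    }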
+
+Entry distribution in hash table
+--------------------------------
+
+As mentioned above, the cuckoo hash implementation pushes elements out of their bucket
+to their alternative location if a new entry to be added has a primary location that
+coincides with their current bucket.
+Therefore, as the user adds more entries to the hash table, the distribution of the hash values
+in the buckets will change: most of them will be in their primary location and a few in
+their secondary location, and the latter share will increase as the table gets busier.
+This information is quite useful, as performance will be lower as more entries
+are evicted to their secondary location.
+
+See the tables below showing the entry distribution as the table utilization increases.
+
+.. _table_hash_lib_1:
+
+.. table:: Entry distribution with 1024 entries
+
+   +--------------+-----------------------+-------------------------+
+   | % Table used | % In Primary location | % In Secondary location |
+   +==============+=======================+=========================+
+   |      25      |         100           |           0             |
+   +--------------+-----------------------+-------------------------+
+   |      50      |         96.1          |           3.9           |
+   +--------------+-----------------------+-------------------------+
+   |      75      |         88.2          |           11.8          |
+   +--------------+-----------------------+-------------------------+
+   |      80      |         86.3          |           13.7          |
+   +--------------+-----------------------+-------------------------+
+   |      85      |         83.1          |           16.9          |
+   +--------------+-----------------------+-------------------------+
+   |      90      |         77.3          |           22.7          |
+   +--------------+-----------------------+-------------------------+
+   |      95.8    |         64.5          |           35.5          |
+   +--------------+-----------------------+-------------------------+
+
+|
+
+.. _table_hash_lib_2:
+
+.. table:: Entry distribution with 1 million entries
+
+   +--------------+-----------------------+-------------------------+
+   | % Table used | % In Primary location | % In Secondary location |
+   +==============+=======================+=========================+
+   |      50      |         96            |           4             |
+   +--------------+-----------------------+-------------------------+
+   |      75      |         86.9          |           13.1          |
+   +--------------+-----------------------+-------------------------+
+   |      80      |         83.9          |           16.1          |
+   +--------------+-----------------------+-------------------------+
+   |      85      |         80.1          |           19.9          |
+   +--------------+-----------------------+-------------------------+
+   |      90      |         74.8          |           25.2          |
+   +--------------+-----------------------+-------------------------+
+   |      94.5    |         67.4          |           32.6          |
+   +--------------+-----------------------+-------------------------+
+
+.. note::
+
+   The last values in the tables above are the average maximum table
+   utilization with random keys and using the Jenkins hash function.
+
 Use Case: Flow Classification
 -----------------------------

diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 3295661..036640c 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -241,3 +241,7 @@ Programmer's Guide
 :numref:`table_qos_33` :ref:`table_qos_33`

 :numref:`table_qos_34` :ref:`table_qos_34`
+
+:numref:`table_hash_lib_1` :ref:`table_hash_lib_1`
+
+:numref:`table_hash_lib_2` :ref:`table_hash_lib_2`
-- 
2.4.3
