On 06.05.2015 15:39, Alberto Garcia wrote:
The current cache algorithm traverses the array starting always from
the beginning, so the average number of comparisons needed to perform
a lookup is proportional to the size of the array.

By using a hash of the offset as the starting point, lookups are
faster and independent of the array size.

The hash is computed using the cluster number of the table, multiplied
by 4 to make it perform better when there are collisions.

In my tests, using a cache with 2048 entries, this reduces the average
number of comparisons per lookup from 430 to 2.5.

Signed-off-by: Alberto Garcia <be...@igalia.com>
---
  block/qcow2-cache.c | 9 +++++++--
  1 file changed, 7 insertions(+), 2 deletions(-)
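The lookup change described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual qcow2-cache.c code: the identifiers (`lookup_start`, `cache_lookup`, `cache_offsets`) and the fixed 64 KB cluster size are assumptions for the sketch. The idea is that the search still covers the whole array, but it starts at the hashed position and wraps around, so a hit near the hash costs only a few comparisons.

```c
#include <stdint.h>

#define CLUSTER_SIZE 65536  /* assumed cluster size for this sketch */

/* Hash: cluster number of the table offset, times 4 to spread
 * consecutive tables and reduce collision clustering. */
static inline int lookup_start(uint64_t offset, int size)
{
    return (int)((offset / CLUSTER_SIZE * 4) % size);
}

/* Search the whole array, beginning at the hashed position and
 * wrapping around.  cache_offsets[i] holds the disk offset cached in
 * entry i (0 meaning empty).  Returns the entry index, or -1 if the
 * offset is not cached. */
int cache_lookup(const uint64_t *cache_offsets, int size, uint64_t offset)
{
    for (int n = 0; n < size; n++) {
        int i = (lookup_start(offset, size) + n) % size;
        if (cache_offsets[i] == offset) {
            return i;
        }
    }
    return -1;
}
```

If the entry sits exactly at its hashed slot, the loop hits on the first comparison, which is where the reported drop from ~430 to ~2.5 average comparisons comes from.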

So it's a direct-mapped cache, unless the hashed entry is already in use, in which case it becomes a fully associative cache...

Let's assume the cache is full. Now this hash algorithm (direct-mapped cache) basically becomes futile, because the LRU algorithm (fully associative cache) takes over: any entry (the LRU entry) is replaced, independently of its hashed position. Thus, the new entry will most probably not be placed at its hashed position. So once the cache is full, I guess this algorithm doesn't help a lot, does it?

(I'm sorry if you had this discussion before...)

Max
