On 24 October 2013 12:50, Don Higgins <d...@higgins.net> wrote:

> I see discussion about optimizing the binary search, but I don't see any
> discussion about optimizing linear search, which might make it much faster
> than binary depending on the search history.  Putting the last key found at
> the front of a most-recently-used list, or just holding the last found key
> as a first test, may result in a significant reduction in total search
> compares if there are frequent repeat requests.
>

Sure, we made no assumptions about the distribution of keys and searches and
assumed both to be random. In some cases you can exploit relevant information
about the distribution (like sorting an array that is already nearly
sorted). But often it is more convenient to separate the two cases and treat
them separately, using a bimodal distribution.

Your good suggestion amounts to asking whether we want to split a search
over N items into two searches, over M and N-M items respectively (or over M
and N, if N >> M). When M < log(N) this is interesting, provided the hit
ratio for [M] is high enough and the cost of maintaining the contents of [M]
is not too high. The case M=1 offers interesting options because it can be
maintained fairly cheaply (though it makes the cost 1 + log(N) for the rest).
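The M=1 case above can be sketched roughly as follows. This is only an
illustrative sketch, not code from the thread; the class and method names
are invented, and it assumes a sorted Python list searched via the standard
bisect module. A repeat hit costs one comparison; a miss costs the extra
comparison plus the usual log(N) binary search.

```python
import bisect


class CachedSearch:
    """Binary search with a last-hit cache (the M=1 case).

    Illustrative sketch only: names are invented for this example.
    """

    def __init__(self, sorted_items):
        self.items = sorted_items
        self.last = None  # index of the most recently found key

    def find(self, key):
        # First test: the most recently found key (1 compare on a repeat).
        if self.last is not None and self.items[self.last] == key:
            return self.last
        # Miss: fall back to binary search over all N items (log N compares).
        i = bisect.bisect_left(self.items, key)
        if i < len(self.items) and self.items[i] == key:
            self.last = i
            return i
        return -1  # key not present
```

Whether this pays off depends, as noted, on the hit ratio: with frequent
repeat requests the cached compare wins; with random searches it only adds
the extra 1 to log(N).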

Rob
