On Sat, 19 Sep 2015 14:24:24 +0100
Rob Willett <mail.robertwillett.com at postfix.robertwillett.com> wrote:

> Thanks. We've got 100 requests a second which may be enough. We'll
> keep looking though for any time.

I suppose you know that 100/sec is about 0.1% of what the machine is
capable of.  

You spoke of read-only data that changes infrequently, and you wanted
maximum speed.  I would sort the strings into a static C array, and use
std::lower_bound to search it.  I would expose that as a function in a
shared library, and publish updates by updating the shared library.  I
would expect at least 100,000 invocations per second, with the added
benefit that the iterator returned by lower_bound instantly answers the
question of existence for the provided string.
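
To make that concrete, here is a minimal sketch.  The key set, the
comparator, and the exported name "lookup" are all made up for
illustration; the real array would be generated from your data and
compiled into the shared library.

    #include <algorithm>
    #include <cstring>

    // Hypothetical data: must stay sorted under the same comparison
    // used by lower_bound below.
    static const char *keys[] = {
        "alpha", "bravo", "charlie", "delta",
    };
    static const std::size_t nkeys = sizeof(keys) / sizeof(keys[0]);

    // Returns the index of key if present, or -1 if absent.
    extern "C" long lookup(const char *key)
    {
        const char **it = std::lower_bound(
            keys, keys + nkeys, key,
            [](const char *a, const char *b) {
                return std::strcmp(a, b) < 0;
            });
        if (it != keys + nkeys && std::strcmp(*it, key) == 0)
            return it - keys;
        return -1;
    }

Callers load the library with dlopen/dlsym (or just link against it)
and call lookup(); swapping in new data is a rebuild and redeploy of
the library, with no change to the callers.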

Everything DHR said about the advantages of using SQLite is true.  If
what you want is to minimize lookup time on static data, though,
searching sorted data will give you better locality of reference and
fewer machine instructions than any interpreted b-tree.

HTH.  

--jkl
