You might want to check out kazlib for your data structure lookups. It contains code implementing linked list, hash, and dictionary access data structures. The hashing code is quite fast for in-memory retrieval, and it is dynamic, so you don't have to preconfigure your hash table size. The linked list code is pretty good; it can create memory pools (node pools) for the list structures, so the package is not continually calling malloc and free for every node insert/delete.

Lloyd <[EMAIL PROTECTED]> wrote:
On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
> I think, looking from Lloyd's email address, (s)he might be limited to
> what CDAC, Trivandrum might be providing its users.
>
> Lloyd, you already know what size your data sets are. Esp. if it
> doesn't change, putting the entire dataset in RAM is the best option.
> If you don't need SQL capabilities, you probably can just use
> something like BerkeleyDB or DBD::Deep (if using Perl), and that will
> be plenty fast. Of course, if it can't be done then it can't be done,
> and you will have to recommend more RAM for the machines (the CPU
> seems fast enough, just the memory may be a bottleneck).
Sorry, I am not talking about limitations on our side, but on the end user's machine that runs our software. I want the tool to run at its best on a low-end machine as well. I don't want the capabilities of a database here; I just want to store data, search for its presence, and remove it when it is no longer needed. Surely I will check out BerkeleyDB. The data set must be in RAM, because its total size is very small (a few megabytes). I just want to speed up the search, which is done millions of times.

Thanks,
Lloyd
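Since the requirement is just "store, check presence, remove" on a small in-RAM set queried millions of times, a plain hash table already gives O(1) average lookups. The sketch below is not kazlib's actual API; it is a minimal, self-contained chained hash set of ints (names like `set_insert`/`set_contains` are made up for illustration), just to show the pattern any such library implements:

```c
#include <stdlib.h>

/* Minimal chained hash set of ints: insert, membership test, remove.
 * Fixed bucket count for simplicity; kazlib grows its table dynamically. */
#define NBUCKETS 1024

typedef struct entry {
    int key;
    struct entry *next;
} entry_t;

static entry_t *buckets[NBUCKETS];

static unsigned hash_int(int k) {
    unsigned h = (unsigned)k;
    h = (h ^ (h >> 16)) * 0x45d9f3bu;  /* simple integer mixing step */
    return h % NBUCKETS;
}

/* Return 1 if k is in the set, 0 otherwise. */
static int set_contains(int k) {
    for (entry_t *e = buckets[hash_int(k)]; e; e = e->next)
        if (e->key == k)
            return 1;
    return 0;
}

/* Insert k (no-op if already present). */
static void set_insert(int k) {
    if (set_contains(k))
        return;
    entry_t *e = malloc(sizeof *e);
    unsigned b = hash_int(k);
    e->key = k;
    e->next = buckets[b];   /* push onto the bucket's chain */
    buckets[b] = e;
}

/* Remove k from the set if present. */
static void set_remove(int k) {
    entry_t **pp = &buckets[hash_int(k)];
    while (*pp) {
        if ((*pp)->key == k) {
            entry_t *dead = *pp;
            *pp = dead->next;   /* unlink from the chain */
            free(dead);
            return;
        }
        pp = &(*pp)->next;
    }
}
```

With a few megabytes of data and a reasonable bucket count, each of the millions of `set_contains` calls touches only a handful of nodes, which is the speedup being asked for.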
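The node-pool trick mentioned for kazlib's list code is worth sketching, since it is what avoids a malloc/free round trip on every insert/delete. This is not kazlib's implementation, just a minimal illustration of the idea under assumed names (`node_t`, `pool_alloc`, `pool_free`): preallocate a block of nodes and recycle freed ones through a free list.

```c
#include <stddef.h>

/* Hypothetical list node; the pool below hands these out and takes
 * them back without ever calling malloc() or free() per operation. */
typedef struct node {
    int value;
    struct node *next;
} node_t;

#define POOL_SIZE 128

static node_t pool[POOL_SIZE];    /* preallocated node storage */
static node_t *free_list = NULL;  /* chain of recycled nodes */
static size_t pool_used = 0;      /* high-water mark into pool[] */

/* Grab a node: reuse a freed one if available, else carve a fresh
 * one from the preallocated block. */
static node_t *pool_alloc(void) {
    if (free_list) {
        node_t *n = free_list;
        free_list = n->next;
        return n;
    }
    if (pool_used < POOL_SIZE)
        return &pool[pool_used++];
    return NULL;  /* pool exhausted */
}

/* Return a node to the free list instead of calling free(). */
static void pool_free(node_t *n) {
    n->next = free_list;
    free_list = n;
}
```

Allocation and release are both a couple of pointer moves, so a list (or hash chain) doing heavy insert/delete churn never pays allocator overhead after the pool is set up.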