If it is just read-only access to the data, then storing the data in
memory with an index, either a hash table or a binary tree, would be
fastest. An easy method is to store the data and index in a flat file
and load it into memory. Memory-mapping the file (virtual memory) gives
you more flexibility and gets more performance out of the available
RAM: when memory is constrained, only your most-accessed pages stay
resident, rather than your application failing outright.
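
For example, here is a rough sketch in Python of the flat-file plus
hash-index approach (the file name, record layout, and key width are
illustrative assumptions, not anything from the original post):

    import mmap

    RECORD_SIZE = 64   # assumed fixed-width records
    KEY_SIZE = 16      # assumed: first 16 bytes of a record are its key

    # Memory-map the flat file; the OS pages data in on demand, so
    # under memory pressure only the hot records stay resident in RAM.
    f = open("data.dat", "rb")
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # Build the hash index: key -> byte offset of its record.
    index = {}
    for off in range(0, len(buf), RECORD_SIZE):
        index[buf[off:off + KEY_SIZE]] = off

    def lookup(key):
        # Return the full record for key, or None if absent.
        off = index.get(key)
        return None if off is None else buf[off:off + RECORD_SIZE]

A sorted file searched with binary search would avoid the index build
time, at the cost of O(log n) lookups instead of O(1).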

P Kishor wrote:
On 4/11/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:

Lloyd wrote:
> hi Puneet and Martin,
> On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
>> File system cache and plenty of RAM?
>>
>
> It is meant to run on an end-user system (e.g. a Pentium 4 with 1GB
> RAM). If you mean swap space as file system cache, that is also
> limited, maybe 2GB.

I was just wondering what the odds were of doing a better job than the
filing system pros, how much time/code that would take on your part and
how much that time would cost versus speccing a bigger/faster machine.



Judging from Lloyd's email address, I think (s)he might be limited to
whatever CDAC, Trivandrum provides its users.

Lloyd, you already know what size your data sets are. Especially if the
data doesn't change, putting the entire dataset in RAM is the best
option. If you don't need SQL capabilities, you can probably just use
something like BerkeleyDB or DBM::Deep (if using Perl), and that will
be plenty fast. Of course, if it can't be done then it can't be done,
and you will have to recommend more RAM for the machines (the CPU
seems fast enough; the memory may be the bottleneck).
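
If Perl isn't a requirement, the same key-value pattern works in other
languages too; as a minimal sketch, here is Python's standard dbm
module standing in for a BerkeleyDB-style store (the file and key
names are made up for illustration):

    import dbm

    # Build the store once; BerkeleyDB offers the same open/get/put
    # pattern, with more tuning and concurrency options.
    with dbm.open("dataset", "c") as db:
        db[b"user:42"] = b"record bytes for user 42"

    # Later, read-only lookups are plain dictionary-style access:
    with dbm.open("dataset", "r") as db:
        if b"user:42" in db:
            value = db[b"user:42"]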



