On 03/30/2010 03:24 PM, Mirage ha wrote:

Dear All,

I do not know if this is the correct place to post this question, but as you have experience in the embedded field I expect you will be able to help me.

I am facing a problem choosing a database engine for my application. My manager suggested using files (e.g. txt files); I suggested using Berkeley DB. Could you tell me which is better, and if there is a better solution (a better DB engine), please tell me. Also, if there is a link to a good database benchmark comparison, please send it.

Thank you,
M.

Depending on the application, I might side with your manager; raw txt or binary files can be better in many situations.

It depends a lot on the workload; explain the application in general.

What is the most frequent operation: a write, a read (or search), or a delete? With raw files, txt or binary, write() is very fast. Reading the file or searching for something inside a large file can be slow without indexes or the ability to binary search. If the data in the file is already sorted, for example by an incrementing ID or by time, a binary search inside the file can be very fast. Appending data to the end of a file can be an O(1) operation. A DB engine, after a write or delete, may have to update its internal indexes, rebalance its B-tree, or update other internal structures.


How much data? How much I/O?

Is program size important? Writing to a file has a much smaller program footprint than a DB engine.

Optimizing for write or erase speed, raw files will be faster. Optimizing for fast searching, a DB engine will be faster, unless you binary search your data in the files or index it yourself. If you're going through the trouble of building your own complex indexes, then you might as well use a DB engine and avoid reinventing the wheel.

--
Karl

