Well... The database and the applications accessing it are all located on
the same machine, so distribution across multiple machines doesn't apply
here.  The system is designed so that only one application handles all the
writes to the DB.  Another application handles all the reads, and there may
be up to two instances of that application running at any one time, so the
number of clients is small.  When the DB-reading application starts, it
reads *all* the data in the DB and ships it elsewhere.

I anticipate 2 bottlenecks...

1. My anticipated bottleneck under Postgres is the write path: the
DB-writing application must parse incoming bursts of data and store them in
the DB.  The machine sending this data is seeing a delay in processing, and
debugging has shown that the INSERTs (on the order of a few thousand) are
where most of the time is spent (see the first sketch after this list).

2. The other bottleneck is data retrieval.  My DB-reading application must
read the DB record by record (it opens a cursor and fetches rows one by
one), build the data into a message according to a system ICD, and ship it
out.  Postgres (postmaster) CPU usage hovers around 85-90% while this is
going on (a sketch of the equivalent read loop follows the next paragraph).
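
To make bottleneck 1 concrete, here is a rough sketch of what the
burst-write path could look like against SQLite's C API.  The "records"
table, its single "value" column, and the pared-down error handling are
placeholders for illustration, not my actual schema or code; the point is
that the few thousand INSERTs are grouped into one transaction behind a
single prepared statement.

    #include <sqlite3.h>

    /* Store one burst of parsed values.  Wrapping all the INSERTs in a
    ** single transaction and reusing one prepared statement avoids paying
    ** the per-statement commit cost for every row. */
    int store_burst(sqlite3 *db, const int *values, int n){
      sqlite3_stmt *stmt;
      int i, rc;

      rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
      if( rc!=SQLITE_OK ) return rc;

      rc = sqlite3_prepare(db, "INSERT INTO records(value) VALUES(?)",
                           -1, &stmt, 0);
      if( rc!=SQLITE_OK ) return rc;

      for(i=0; i<n; i++){
        sqlite3_bind_int(stmt, 1, values[i]);
        sqlite3_step(stmt);     /* run the INSERT for this row */
        sqlite3_reset(stmt);    /* ready the statement for the next row */
      }

      sqlite3_finalize(stmt);
      return sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }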

The planned expansion of data will force the table to grow from a maximum
of 3400 rows to a maximum of 11560.
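
Since the read path is a full-table scan, its cost will scale directly with
that row count.  A rough sketch of the equivalent read loop under SQLite's
C API follows; again the "records" table and the build_and_ship() routine
are placeholders standing in for my real schema and the ICD message
building/transmission code.

    #include <sqlite3.h>

    /* Placeholder for the code that formats one row per the ICD and
    ** transmits it. */
    extern void build_and_ship(int id, const unsigned char *payload);

    /* Walk every row in the table and hand each one off for shipping. */
    int ship_all_rows(sqlite3 *db){
      sqlite3_stmt *stmt;
      int rc;

      rc = sqlite3_prepare(db, "SELECT id, payload FROM records",
                           -1, &stmt, 0);
      if( rc!=SQLITE_OK ) return rc;

      while( sqlite3_step(stmt)==SQLITE_ROW ){
        int id = sqlite3_column_int(stmt, 0);
        const unsigned char *payload = sqlite3_column_text(stmt, 1);
        build_and_ship(id, payload);   /* format and send this record */
      }
      return sqlite3_finalize(stmt);
    }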

From what I gather in reading about SQLite, it seems better suited to this
kind of workload performance-wise.  All my testing of the current system
points to Postgres (postmaster) as the bottleneck.

Jason Alburger
HID/NAS/LAN Engineer
L3/ATO-E En Route Peripheral Systems Support
609-485-7225


                                                                           
On 03/01/2006 09:54 AM, [EMAIL PROTECTED] wrote to sqlite-users@sqlite.org
(Re: [sqlite] performance statistics):
>
> I am currently investigating porting my project from postgres to SQLite
> due to anticipated performance issues
>

I do not think speed should really be the prime consideration
here.  PostgreSQL and SQLite solve very different problems.
I think you should choose the system that maps best to
the problem you are trying to solve.

PostgreSQL is designed to support a large number of clients
distributed across multiple machines and accessing a relatively
large data store that is in a fixed location.  PostgreSQL is
designed to replace Oracle.

SQLite is designed to support a smaller number of clients
all located on the same host computer and accessing a portable
data store of only a few dozen gigabytes which is easily copied
or moved.  SQLite is designed to replace fopen().

Both SQLite and PostgreSQL can be used to solve problems outside
their primary focus.  And so a high-end use of SQLite will
certainly overlap a low-end use of PostgreSQL.  But you will
be happiest if you use each of them for what it was
originally designed for.

If you give us some more clues about what your requirements
are we can give you better guidance about which database might
be the best choice.

--
D. Richard Hipp   <[EMAIL PROTECTED]>
