Re: [sqlite] Performance statistics?

2008-04-25 Thread Richard Klein
> Richard Klein wrote:
>> Does SQLite have a mechanism, in addition to the
>> ANALYZE statement, for recording and dumping
>> performance statistics?
>>
> 
> What kind of performance statistics are you looking for?
> 
> SQLiteSpy (see 
> http://www.yunqa.de/delphi/doku.php/products/sqlitespy/index) measures 
> the execution time of each SQL statement to help you optimize your SQL.
> 
> Dennis Cote

I was thinking of something like the tools that Oracle provides
to assist with performance monitoring and tuning:  ADDM, TKProf,
Statspack.



Re: [sqlite] Performance statistics?

2008-04-25 Thread Dennis Cote
Richard Klein wrote:
> Does SQLite have a mechanism, in addition to the
> ANALYZE statement, for recording and dumping
> performance statistics?
> 

What kind of performance statistics are you looking for?

SQLiteSpy (see 
http://www.yunqa.de/delphi/doku.php/products/sqlitespy/index) measures 
the execution time of each SQL statement to help you optimize your SQL.
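
For a rough, built-in look at how SQLite decides to execute a statement,
you can also prefix a query with EXPLAIN QUERY PLAN; a minimal sketch
(the table and index names below are invented for illustration):

CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
CREATE INDEX orders_customer_idx ON orders(customer_id);

EXPLAIN QUERY PLAN
SELECT total FROM orders WHERE customer_id = 42;
-- The reported plan should mention orders_customer_idx rather than a full scan.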

Dennis Cote




Re: [sqlite] performance statistics

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 09:25:02AM -0500, [EMAIL PROTECTED] wrote:
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues (we will have to start handling lots more
> data).  My initial speed testing of handling the expanded amount of data has
> suggested that the postgres performance will be unacceptable.  I'm
> convinced that SQLite will solve my performance issues, however, the speed
> comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
> is old.  This is the type of data I need, but I'd like to have more recent
> data to present to my manager, if it is available.  Can anybody point me
> anywhere that may have similar but more recent data?

What tuning have you done to PostgreSQL? The out-of-the-box
postgresql.conf is *VERY* conservative; it's meant to get you up and
running, not provide good performance.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] performance statistics

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 05:42:57PM +0100, Denis Sbragion wrote:
> Hello Andrew,
> 
> On Wed, March 1, 2006 17:31, Andrew Piskorski wrote:
> > Is that in fact true?  I am not familiar with how PostgreSQL
> > implements the SERIALIZABLE isolation level, but I assume that
> > PostgreSQL's MVCC would still give some advantage even under
> > SERIALIZABLE: It should allow the readers and (at least one of) the
> > writers to run concurrently.  Am I mistaken?
> 
> PostgreSQL has always played the "readers are never blocked" mantra. Nevertheless,
> I really wonder how the strict serializable constraints could be satisfied
> without blocking the readers while a write is in progress.

Simple: readers have to handle the possibility that they'll need to
re-run their transaction. From http://lnk.nu/postgresql.org/8gf.html:

 UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave
 the same as SELECT in terms of searching for target rows: they will
 only find target rows that were committed as of the transaction start
 time. However, such a target row may have already been updated (or
 deleted or locked) by another concurrent transaction by the time it is
 found. In this case, the serializable transaction will wait for the
 first updating transaction to commit or roll back (if it is still in
 progress). If the first updater rolls back, then its effects are
 negated and the serializable transaction can proceed with updating the
 originally found row. But if the first updater commits (and actually
 updated or deleted the row, not just locked it) then the serializable
 transaction will be rolled back with the message

 ERROR:  could not serialize access due to concurrent update

 because a serializable transaction cannot modify or lock rows changed
 by other transactions after the serializable transaction began. 
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] performance statistics

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 05:23:05PM +0100, Denis Sbragion wrote:
> Insert records as "processing by writer"; update them to "ready to be
> processed" with a single atomic update after a burst of inserts; in the
> reader, move all "ready to be processed" records to "to be processed by
> reader" with another single atomic update; process all the "to be
> processed by reader" records; mark them "processed", again with a single
> atomic update, when finished; and, if needed, delete "processed" records.

FWIW, the performance of that would be pretty bad in most MVCC
databases, because you can't do an update 'in place' (Ok, Oracle can,
but they still have to write both undo and redo log info, so it's
effectively the same as not being 'in place' unless you have a lot of
indexes and you're not touching indexed rows).
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] performance statistics

2006-03-01 Thread drh
Andrew Piskorski <[EMAIL PROTECTED]> wrote:
> On Wed, Mar 01, 2006 at 10:53:12AM -0500, [EMAIL PROTECTED] wrote:
> > If you use READ COMMITTED isolation (the default in PostgreSQL)
> 
> > If it is a problem,
> > then you need to select SERIALIZABLE isolation in PostgreSQL
> > in which case the MVCC is not going to give you any advantage
> > over SQLite.
> 
> Is that in fact true?  I am not familiar with how PostgreSQL
> implements the SERIALIZABLE isolation level, but I assume that
> PostgreSQL's MVCC would still give some advantage even under
> SERIALIZABLE: It should allow the readers and (at least one of) the
> writers to run concurrently.  Am I mistaken?
> 

Well.  On second thought, you might be right.  I guess it
depends on how PostgreSQL implements SERIALIZABLE.  Perhaps
somebody with a better knowledge of the inner workings of
PostgreSQL can answer with more authority.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Jim Dodgen
Quoting [EMAIL PROTECTED]:
> 
> I anticipate 2 bottlenecks...
> 
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.

I would wrap the "bursts" in a transaction if you can (BEGIN; and COMMIT;
statements).
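
Something along these lines (a minimal sketch; the table and columns are
invented for illustration):

BEGIN;
INSERT INTO readings(sensor_id, value) VALUES(1, 98.6);
INSERT INTO readings(sensor_id, value) VALUES(2, 99.1);
-- ... the rest of the burst ...
COMMIT;
-- One disk sync for the whole burst instead of one per INSERT.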

> 
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
>

I do a similar thing in my application: I snapshot (copy) the database (an
SQLite database is a single file) and then run my batch process against the
copy.
 
> The expansion of data will force me to go from a maximum 3400 row table to
> a maximum of 11560.

My tables are a similar size.

> 
> From what I gather in reading about SQLite, it seems to be better equipped
> for performance.  All my testing of the current system points to postgres
> (postmaster) being my bottleneck.
> 
> Jason Alburger
> HID/NAS/LAN Engineer
> L3/ATO-E En Route Peripheral Systems Support
> 609-485-7225
> 
> 
>
>  [EMAIL PROTECTED] wrote on 03/01/2006 09:54 AM (To: sqlite-users@sqlite.org,
>  Subject: Re: [sqlite] performance statistics):
> 
> [EMAIL PROTECTED] wrote:
> >
> > I am currently investigating porting my project from postgres to SQLite
> due
> > to anticipated performance issues
> >
> 
> I do not think speed should really be the prime consideration
> here.  PostgreSQL and SQLite solve very different problems.
> I think you should choose the system that is the best map to
> the problem you are trying to solve.
> 
> PostgreSQL is designed to support a large number of clients
> distributed across multiple machines and accessing a relatively
> large data store that is in a fixed location.  PostgreSQL is
> designed to replace Oracle.
> 
> SQLite is designed to support a smaller number of clients
> all located on the same host computer and accessing a portable
> data store of only a few dozen gigabytes which is easily copied
> or moved.  SQLite is designed to replace fopen().
> 
> Both SQLite and PostgreSQL can be used to solve problems outside
> their primary focus.  And so a high-end use of SQLite will
> certainly overlap a low-end use of PostgreSQL.  But you will
> be happiest if you use them both for what they were
> originally designed for.
> 
> If you give us some more clues about what your requirements
> are we can give you better guidance about which database might
> be the best choice.
> 
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
> 
> 

Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello Andrew,

On Wed, March 1, 2006 17:31, Andrew Piskorski wrote:
> Is that in fact true?  I am not familiar with how PostgreSQL
> implements the SERIALIZABLE isolation level, but I assume that
> PostgreSQL's MVCC would still give some advantage even under
> SERIALIZABLE: It should allow the readers and (at least one of) the
> writers to run concurrently.  Am I mistaken?

PostgreSQL has always played the "readers are never blocked" mantra. Nevertheless,
I really wonder how the strict serializable constraints could be satisfied
without blocking the readers while a write is in progress.

Bye,

-- 
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread Jay Sprenkle
> My question is not about extending/improving SQLite but about having an
> extra tool which helps to optimize the SQL written for SQLite. So SQLite
> stays indeed lightweight and fast, but the SQL it is fed with is
> automatically optimized.

Like I said, the optimizer tool is the programmer. In a lot of cases the
SQL in a program doesn't change, so the best place to optimize it is when
the program is designed, not at query time. If anyone wrote a tool like
that I'm sure it would be useful.


Re: [sqlite] performance statistics

2006-03-01 Thread Andrew Piskorski
On Wed, Mar 01, 2006 at 10:53:12AM -0500, [EMAIL PROTECTED] wrote:
> If you use READ COMMITTED isolation (the default in PostgreSQL)

> If it is a problem,
> then you need to select SERIALIZABLE isolation in PostgreSQL
> in which case the MVCC is not going to give you any advantage
> over SQLite.

Is that in fact true?  I am not familiar with how PostgreSQL
implements the SERIALIZABLE isolation level, but I assume that
PostgreSQL's MVCC would still give some advantage even under
SERIALIZABLE: It should allow the readers and (at least one of) the
writers to run concurrently.  Am I mistaken?

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/


Re: [sqlite] performance statistics

2006-03-01 Thread Ran
My question is not about extending/improving SQLite but about having an
extra tool which helps to optimize the SQL written for SQLite. So SQLite
stays indeed lightweight and fast, but the SQL it is fed with is
automatically optimized.

Ran

On 3/1/06, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
>
> On 3/1/06, Ran <[EMAIL PROTECTED]> wrote:
> > In light of your answer, I wonder if it is possible to implement such
> > optimizer that does the hand-optimizing automatically, but of course
> BEFORE
> > they are actually being used by SQLite.
> >
> > So the idea is not to make SQLite optimizer better, but to create a kind
> of
> > SQL optimizer that gets as input SQL statements and gives as output
> > optimized (specifically for SQLite) SQL statements.
>
> I think the concept so far has been that the programmer is the query
> optimizer so it stays fast and lightweight. ;)
>


Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello DRH,

On Wed, March 1, 2006 16:53, [EMAIL PROTECTED] wrote:
...
> If you use READ COMMITTED isolation (the default in PostgreSQL)
> then your writes are not atomic as seen by the reader.  In other
...
> then you need to select SERIALIZABLE isolation in PostgreSQL
> in which case the MVCC is not going to give you any advantage
> over SQLite.

Indeed. Another trick which may be useful, and that we often used in our
applications, which sometimes have similar needs: use an explicit "status"
field to mark each record's state.

Insert records as "processing by writer"; update them to "ready to be
processed" with a single atomic update after a burst of inserts; in the
reader, move all "ready to be processed" records to "to be processed by
reader" with another single atomic update; process all the "to be
processed by reader" records; mark them "processed", again with a single
atomic update, when finished; and, if needed, delete "processed" records.
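
In SQL the cycle looks roughly like this (a sketch only; the table and
column names are invented, the status values are the ones described above):

CREATE TABLE jobs(payload TEXT, status TEXT);
CREATE INDEX jobs_status ON jobs(status);

-- Writer: insert a burst, then publish it with one atomic update.
INSERT INTO jobs(payload, status) VALUES('...', 'processing by writer');
UPDATE jobs SET status = 'ready to be processed'
 WHERE status = 'processing by writer';

-- Reader: claim, process, and mark, each step a single atomic update.
UPDATE jobs SET status = 'to be processed by reader'
 WHERE status = 'ready to be processed';
SELECT payload FROM jobs WHERE status = 'to be processed by reader';
UPDATE jobs SET status = 'processed'
 WHERE status = 'to be processed by reader';
DELETE FROM jobs WHERE status = 'processed';  -- if needed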

This kind of approach requires just an index on the status field and is also
really useful when something goes wrong (application bug, power outage and so
on) because it becomes pretty easy to reprocess all the unprocessed records
just by looking at the status. The end result should be pretty similar to
the use of temporary tables, but without the need for additional tables.

Bye,

-- 
Dr. Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread Clay Dowling

[EMAIL PROTECTED] said:
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the
> time
> is wasted.

Jason,

You might get better performance simply by wrapping the inserts into a
transaction, or wrapping a transaction around a few hundred inserts at a
time.  A transaction is a very expensive operation, and unless you group
your inserts into transactions of several inserts, you pay the transaction
price for each single insert.  That has a devastating impact on
performance no matter what database you're using, so long as it's ACID
compliant.

SQLite is a wonderful tool and absolutely saving my bacon on a current
project, but you can save yourself the trouble of rewriting your database
access by making a slight modification to your code.  This assumes, of
course, that you aren't already using transactions.

Clay Dowling
-- 
Simple Content Management
http://www.ceamus.com



Re: [sqlite] performance statistics

2006-03-01 Thread Derrell Lipman
[EMAIL PROTECTED] writes:

> PostgreSQL has a much better query optimizer than SQLite.
> (You can do that when you have a multi-megabyte memory footprint
> budget versus 250KiB for SQLite.)  In your particular case,
> I would guess you could get SQLite to run as fast or faster
> than PostgreSQL by hand-optimizing your admittedly complex
> queries.

In this light, I had a single query that took about 24 *hours* to complete in
sqlite (2.8.x).  I hand-optimized the query by breaking it into multiple (14
I think) separate sequential queries which generate temporary tables for the
next query to work with, and building some indexes on the temporary tables.
The 24 hour query was reduced to a few *seconds*.
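
The general shape of that rewrite, as a sketch with invented table and
column names (the real queries were of course much larger):

CREATE TEMP TABLE step1 AS
SELECT a.id, a.grp FROM big1 a WHERE a.flag = 1;
CREATE INDEX step1_grp ON step1(grp);

CREATE TEMP TABLE step2 AS
SELECT s.id, b.data FROM step1 s, big2 b WHERE b.grp = s.grp;
-- ... and so on; each stage is indexed before the next stage joins on it.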

Query optimization is critical for large queries in sqlite, and sqlite can be
made VERY fast if you take the time to optimize the queries that are taking a
long time to execute.

Derrell


Re: [sqlite] performance statistics

2006-03-01 Thread drh
"Denis Sbragion" <[EMAIL PROTECTED]> wrote:
> Furthermore, with both a reader and a writer active at the same time, the
> MVCC "better than row-level locking" mechanism might provide you better
> performance than SQLite, but here the devil's in the details.

"D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
> Since PostgreSQL supports READ COMMITTED isolation by default, the
> writer lock will not be a problem there.  But you will have the same
> issue on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only
> does SERIALIZABLE for database connections running in separate
> processes.

To combine and clarify our remarks:

If you use READ COMMITTED isolation (the default in PostgreSQL)
then your writes are not atomic as seen by the reader.  In other
words, if a burst of inserts occurs while a read is in process,
the read might end up seeing some old data from before the burst
and some new data from afterwards.  This may or may not be a
problem for you depending on your application.  If it is a problem,
then you need to select SERIALIZABLE isolation in PostgreSQL
in which case the MVCC is not going to give you any advantage
over SQLite.
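
For reference, PostgreSQL selects the stricter mode per transaction; a
minimal sketch in standard PostgreSQL SQL ("maintable" is a stand-in name):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM maintable;  -- every read in this transaction sees one snapshot
COMMIT;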

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Jay Sprenkle
On 3/1/06, Ran <[EMAIL PROTECTED]> wrote:
> In light of your answer, I wonder if it is possible to implement such
> optimizer that does the hand-optimizing automatically, but of course BEFORE
> they are actually being used by SQLite.
>
> So the idea is not to make SQLite optimizer better, but to create a kind of
> SQL optimizer that gets as input SQL statements and gives as output
> optimized (specifically for SQLite) SQL statements.

I think the concept so far has been that the programmer is the query
optimizer so it stays fast and lightweight. ;)


Re: [sqlite] performance statistics

2006-03-01 Thread drh
[EMAIL PROTECTED] wrote:
> Well... The database and the applications accessing the database are all
> located on the same machine, so distribution across multiple machines
> doesn't apply here.   The system is designed so that only one application
> handles all the writes to the DB.   Another application handles all the
> reads, and there may be up to two instances of that application running at
> any one time, so I guess that shows a small number of clients.   When the
> application that reads the DB data starts, it reads *all* the data in the
> DB and ships it elsewhere.

I think either SQLite or PostgreSQL would be appropriate here.  I'm
guessing that SQLite will have the speed advantage in this particular
case if you are careful in how you code it up.

> 
> I anticipate 2 bottlenecks...
> 
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.

You will do well to gather your incoming data into a TEMP table then
insert the whole wad into the main database all in one go using
something like this:

INSERT INTO maintable SELECT * FROM temptable;
DELETE FROM temptable;

Actually, this same trick might solve your postgresql performance
problem and thus obviate the need to port your code.

> 
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
> 
> The expansion of data will force me to go from a maximum 3400 row table to
> a maximum of 11560.

Unless each row is particularly large, this is not a very big database
and should not present a problem to either SQLite or PostgreSQL.  Unless
you are doing some kind of strange join that you haven't told us about.

If your data formatting takes a long time, the reader might block the
writer in SQLite.  The writer process will have to wait to do its write
until the reader has finished.  You can avoid this by making a copy of
the data to be read into a temporary table before formatting it:

CREATE TEMP TABLE outbuf AS SELECT * FROM maintable;
SELECT * FROM outbuf;
  -- Do your formatting and sending
DROP TABLE outbuf;

Since PostgreSQL supports READ COMMITTED isolation by default, the
writer lock will not be a problem there.  But you will have the same
issue on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only
does SERIALIZABLE for database connections running in separate
processes.
--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello Jason,

On Wed, March 1, 2006 16:20, [EMAIL PROTECTED] wrote:
...
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.
>
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
...

Though your application seems a good candidate for SQLite use, have you tried
surrounding each burst of inserts and reads in a single transaction? With
PostgreSQL, but also with SQLite, performance might increase dramatically
with proper transaction handling in place. Furthermore, with both a reader
and a writer active at the same time, the MVCC "better than row-level
locking" mechanism might provide you better performance than SQLite, but
here the devil's in the details. A lot depends on how much the read and
write operations overlap each other.

Bye,

-- 
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread Ran
In light of your answer, I wonder if it is possible to implement such
optimizer that does the hand-optimizing automatically, but of course BEFORE
they are actually being used by SQLite.

So the idea is not to make SQLite optimizer better, but to create a kind of
SQL optimizer that gets as input SQL statements and gives as output
optimized (specifically for SQLite) SQL statements.

Ran

On 3/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> PostgreSQL has a much better query optimizer than SQLite.
> (You can do that when you have a multi-megabyte memory footprint
> budget versus 250KiB for SQLite.)  In your particular case,
> I would guess you could get SQLite to run as fast or faster
> than PostgreSQL by hand-optimizing your admittedly complex
> queries.
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
>
>


Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello Serge,

On Wed, March 1, 2006 16:11, Serge Semashko wrote:
...
> I'm in no way a database expert, but the tests on the benchmarking page
> seem a bit trivial and look like they only test database API (data
> fetching throughput), but not the engine performance. I would like to
> see some benchmarks involving really huge databases and complicated
> queries and wonder if the results will be similar to those I have
> observed...

Those benchmarks target the primary use of SQLite, which isn't the same as
that of other database engines, as perfectly explained by DRH himself. Even
though its performance and rich feature list might make us forget the
intended use of SQLite, we must remember that it is first of all a compact,
lightweight, excellent *embedded* database engine. SQLite simply isn't
designed for huge databases and complicated queries, even though most of
the time it is able to cope with both, being at least a bit more than a
fopen() replacement. Don't be shy Dr. Hipp! :)

Bye,

-- 
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread jason.ctr.alburger

Well... The database and the applications accessing the database are all
located on the same machine, so distribution across multiple machines
doesn't apply here.   The system is designed so that only one application
handles all the writes to the DB.   Another application handles all the
reads, and there may be up to two instances of that application running at
any one time, so I guess that shows a small number of clients.   When the
application that reads the DB data starts, it reads *all* the data in the
DB and ships it elsewhere.

I anticipate 2 bottlenecks...

1. My anticipated bottleneck under postgres is that the DB-writing app.
must parse incoming bursts of data and store in the DB.  The machine
sending this data is seeing a delay in processing.  Debugging has shown
that the INSERTS (on the order of a few thousand) is where most of the time
is wasted.

2. The other bottleneck is data retrieval.  My DB-reading application must
read the DB record-by-record (opens a cursor and reads one-by-one), build
the data into a message according to a system ICD, and ship it out.
postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.

The expansion of data will force me to go from a maximum 3400 row table to
a maximum of 11560.

From what I gather in reading about SQLite, it seems to be better equipped
for performance.  All my testing of the current system points to postgres
(postmaster) being my bottleneck.

Jason Alburger
HID/NAS/LAN Engineer
L3/ATO-E En Route Peripheral Systems Support
609-485-7225


   
[EMAIL PROTECTED] wrote on 03/01/2006 09:54 AM (To: sqlite-users@sqlite.org,
Subject: Re: [sqlite] performance statistics):

[EMAIL PROTECTED] wrote:
>
> I am currently investigating porting my project from postgres to SQLite
due
> to anticipated performance issues
>

I do not think speed should really be the prime consideration
here.  PostgreSQL and SQLite solve very different problems.
I think you should choose the system that is the best map to
the problem you are trying to solve.

PostgreSQL is designed to support a large number of clients
distributed across multiple machines and accessing a relatively
large data store that is in a fixed location.  PostgreSQL is
designed to replace Oracle.

SQLite is designed to support a smaller number of clients
all located on the same host computer and accessing a portable
data store of only a few dozen gigabytes which is easily copied
or moved.  SQLite is designed to replace fopen().

Both SQLite and PostgreSQL can be used to solve problems outside
their primary focus.  And so a high-end use of SQLite will
certainly overlap a low-end use of PostgreSQL.  But you will
be happiest if you use them both for what they were
originally designed for.

If you give us some more clues about what your requirements
are we can give you better guidance about which database might
be the best choice.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread drh
Serge Semashko <[EMAIL PROTECTED]> wrote:
>> 
> We started with using sqlite3, but the database has grown now to
> something like 1GB and has millions of rows. It does not perform as fast
> as we would like, so we looked for alternatives. We tried to convert
> it to both mysql and postgresql and tried to run the same query we are
> using quite often (the query is rather big and contains a lot of
> conditions, but it extracts only about a hundred matching rows). The
> result was a bit surprising. MySQL just locked down and could not
> provide any results. After killing it, increasing memory limits in its
> configuration to use all the available memory, it managed to complete
> the query but was still slower than sqlite3 (lost about 30%). Postgresql
> on the other hand was a really nice surprise and it was several times
> faster than sqlite3! Now we are converting to postgresql :)
> 

PostgreSQL has a much better query optimizer than SQLite.
(You can do that when you have a multi-megabyte memory footprint
budget versus 250KiB for SQLite.)  In your particular case,
I would guess you could get SQLite to run as fast or faster
than PostgreSQL by hand-optimizing your admittedly complex
queries.
--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Serge Semashko

[EMAIL PROTECTED] wrote:

> I am currently investigating porting my project from postgres to
> SQLite due to anticipated performance issues (we will have to start
> handling lots more data).  My initial speed testing of handling the
> expanded amount of data has suggested that the postgres performance
> will be unacceptable.  I'm convinced that SQLite will solve my
> performance issues, however, the speed comparison data found on the
> SQLite site (http://www.sqlite.org/speed.html) is old.  This is the
> type of data I need, but I'd like to have more recent data to present
> to my manager, if it is available.  Can anybody point me anywhere
> that may have similar but more recent data?
>
> Thanks in advance!
>
> Jason Alburger, HID/NAS/LAN Engineer, L3/ATO-E En Route Peripheral
> Systems Support, 609-485-7225


Actually I have quite the opposite experience :)

We started with using sqlite3, but the database has grown now to
something like 1GB and has millions of rows. It does not perform as fast
as we would like, so we looked for alternatives. We tried to convert
it to both mysql and postgresql and tried to run the same query we are
using quite often (the query is rather big and contains a lot of
conditions, but it extracts only about a hundred matching rows). The
result was a bit surprising. MySQL just locked down and could not
provide any results. After killing it, increasing memory limits in its
configuration to use all the available memory, it managed to complete
the query but was still slower than sqlite3 (lost about 30%). Postgresql
on the other hand was a really nice surprise and it was several times
faster than sqlite3! Now we are converting to postgresql :)

I'm in no way a database expert, but the tests on the benchmarking page
seem a bit trivial and look like they only test database API (data
fetching throughput), but not the engine performance. I would like to
see some benchmarks involving really huge databases and complicated
queries and wonder if the results will be similar to those I have
observed...



Re: [sqlite] performance statistics

2006-03-01 Thread drh
[EMAIL PROTECTED] wrote:
> 
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues
>

I do not think speed should really be the prime consideration
here.  PostgreSQL and SQLite solve very different problems.
I think you should choose the system that is the best map to
the problem you are trying to solve.

PostgreSQL is designed to support a large number of clients
distributed across multiple machines and accessing a relatively
large data store that is in a fixed location.  PostgreSQL is
designed to replace Oracle.

SQLite is designed to support a smaller number of clients
all located on the same host computer and accessing a portable
data store of only a few dozen gigabytes which is easily copied
or moved.  SQLite is designed to replace fopen().

Both SQLite and PostgreSQL can be used to solve problems outside
their primary focus.  And so a high-end use of SQLite will
certainly overlap a low-end use of PostgreSQL.  But you will 
be happiest if you use them both for what they were
originally designed for.

If you give us some more clues about what your requirements
are we can give you better guidance about which database might
be the best choice.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Jay Sprenkle
> All -
>
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues (we will have to start handling lots more
> data).  My initial speed testing of handling the expanded amount of data has
> suggested that the postgres performance will be unacceptable.  I'm
> convinced that SQLite will solve my performance issues, however, the speed
> comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
> is old.  This is the type of data I need, but I'd like to have more recent
> data to present to my manager, if it is available.  Can anybody point me
> anywhere that may have similar but more recent data?

This might be valuable for you:
http://sqlite.phxsoftware.com/forums/9/ShowForum.aspx