Re: [sqlite] speedtest result is obsolete

2005-02-09 Thread Dan Keeley
On Tue, 8 Feb 2005, Chris Schirlinger wrote: I did a small test to see if performance was linear with time. I wanted to make sure it was suitable for my application. It seems with both indexed and unindexed tables

Re: [sqlite] speedtest result is obsolete

2005-02-09 Thread Chris Schirlinger
I think you people are missing the point here: the performance increase you're seeing is all down to OS caching and will vary across different ports. It's nothing to do with sqlite, and will affect every package. Therefore the only way to fairly compare mysql/postgres/sqlite is to make

Re: [sqlite] speedtest result is obsolete

2005-02-08 Thread Christian Smith
On Tue, 8 Feb 2005, Chris Schirlinger wrote: I did a small test to see if performance was linear with time. I wanted to make sure it was suitable for my application. It seems with both indexed and unindexed tables it doesn't take significantly longer to do the 1,000,000th insert than it did

Re: [sqlite] speedtest result is obsolete

2005-02-08 Thread Chris Schirlinger
Doing a keyed search is no guarantee that you won't touch *every* single page in the table, if the rows are inserted in random order. Try this: ...cut... Assuming key is the key field you want, the records will be inserted into wibble in key order. Selecting by key will then touch the least
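The actual recipe is cut from the snippet above; what follows is a minimal sketch of the rebuild-in-key-order idea it describes, using Python's sqlite3 module. The names wibble and key come from the message; old_wibble is a hypothetical source table.

    import sqlite3

    con = sqlite3.connect("test.db")
    # Copy rows into a fresh table in key order, so rows with adjacent
    # keys end up on the same database pages (a poor man's clustering).
    con.executescript("""
        CREATE TABLE wibble AS SELECT * FROM old_wibble ORDER BY key;
        CREATE INDEX wibble_key ON wibble(key);
    """)
    con.commit()

Range queries on key then read neighbouring pages instead of hopping all over the file.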

Re: [sqlite] speedtest result is obsolete

2005-02-08 Thread D. Richard Hipp
On Wed, 2005-02-09 at 09:30 +1100, Chris Schirlinger wrote: Doing a keyed search is no guarantee that you won't touch *every* single page in the table, if the rows are inserted in random order. Try this: ...cut... Assuming key is the key field you want, the records will be inserted into

Re: [sqlite] speedtest result is obsolete

2005-02-08 Thread Chris Schirlinger
Another trick you can pull is to create an index that contains every column in the table with the cluster index columns occurring first. That will double the size of your database. But when SQLite can get all of the information it needs out of the index it does not bother to consult the
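A sketch of that covering-index trick, again with Python's sqlite3 module; the table t(key, a, b) is hypothetical:

    import sqlite3

    con = sqlite3.connect("test.db")
    con.execute("CREATE TABLE t (key INTEGER, a TEXT, b TEXT)")
    # Index every column, cluster-key column first. A query that needs
    # only these columns can be answered from the index alone, at the
    # cost of roughly doubling the database size.
    con.execute("CREATE INDEX t_cover ON t(key, a, b)")
    con.commit()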

[sqlite] speedtest result is obsolete

2005-02-07 Thread Yasuo Ohgaki
Hi, The speed test result is obsolete http://sqlite.org/speed.html Here are my results. http://www.ohgaki.net/download/speedtest.html http://www.ohgaki.net/download/speedtest-pgsql-nosync.html The latter one is without fsync for PostgreSQL. All dbmses are tested with default rpm package
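For what it's worth, the closest SQLite equivalent to a no-fsync PostgreSQL run is to turn off synchronous writes; a one-line sketch, trading durability on crash for speed, the same trade the nosync test makes:

    import sqlite3

    con = sqlite3.connect("test.db")
    # Don't fsync() after writes; faster, but a crash can corrupt data.
    con.execute("PRAGMA synchronous = OFF")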

Re: [sqlite] speedtest result is obsolete

2005-02-07 Thread Clay Dowling
Yasuo Ohgaki said: http://www.ohgaki.net/download/speedtest.html http://www.ohgaki.net/download/speedtest-pgsql-nosync.html The tests were very interesting. Based on what I see in those reports, any one of the three should be suitable for most tasks, with the engine chosen based on the

Re: [sqlite] speedtest result is obsolete

2005-02-07 Thread Hugh Gibson
I would be interested to know the results for very large data sets. Indications on the list have been that performance suffers when the number of records gets very big (> 1 million), possibly due to using an internal sort. Hugh

Re: [sqlite] speedtest result is obsolete

2005-02-07 Thread Jay
I did a small test to see if performance was linear with time. I wanted to make sure it was suitable for my application. It seems with both indexed and unindexed tables it doesn't take significantly longer to do the 1,000,000th insert than it did the first.
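A minimal sketch of that kind of linearity check, using Python's sqlite3 module (table name, row count, and batch size are arbitrary): time successive insert batches and compare the first against the last.

    import sqlite3, time

    con = sqlite3.connect("test.db")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

    BATCH = 100000
    for batch in range(10):  # 1,000,000 rows in total
        start = time.time()
        con.executemany(
            "INSERT INTO t (v) VALUES (?)",
            (("row %d" % i,) for i in range(BATCH)),
        )
        con.commit()
        print("batch %d: %.2fs" % (batch, time.time() - start))

If the per-batch times stay roughly flat, insert cost is not growing with table size.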

Re: [sqlite] speedtest result is obsolete

2005-02-07 Thread Christopher Petrilli
On Mon, 7 Feb 2005 10:09:58 -0800 (PST), Jay [EMAIL PROTECTED] wrote: I did a small test to see if performance was linear with time. I wanted to make sure it was suitable for my application. It seems with both indexed and unindexed tables it doesn't take significantly longer to do the

Re: [sqlite] speedtest result is obsolete

2005-02-07 Thread Chris Schirlinger
I would be interested to know the results for very large data sets. Indications on the list have been that performance suffers when the number of records gets very big (> 1 million), possibly due to using an internal sort. I must say, with a 2+ million row data set, we aren't getting