RE: [sqlite] SQLite performance with mid-size databases

2004-06-15 Thread Ionut Filip
Hi Richard,

Here is a tip to speed up insert operations: use prepared statements
instead of plain INSERTs. If you have to insert more than one record with
the same format, the performance increase is significant.

Transactions also speed up db operations; there are more detailed threads
about this on the list.

> The speed results on the website [...]
Here I think you need to make sure that concurrent access to the DB is not
a bottleneck.

Ionut Filip

PS: You can also try to post a sample (partial) schema of your database.
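Both tips can be combined in a modern binding; here is a minimal sketch using Python's sqlite3 module (the table and column names are made up for illustration). executemany() reuses one prepared statement for every row, and the `with conn:` block wraps the whole batch in a single transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative; use a file path in practice
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, "item-%d" % i) for i in range(1000)]

# One prepared statement, reused for every row, inside one transaction.
with conn:
    conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM items").fetchone()[0]
print(count)
```

Without the transaction, each INSERT would force its own journal sync, which is where most of the time goes.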


-Original Message-
From: Richard Kuo [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 16, 2004 7:04 AM
To: [EMAIL PROTECTED]
Subject: [sqlite] SQLite performance with mid-size databases


Hi.  We are using SQLite to store and retrieve data rows where each
row is roughly 2K total in size and in a table of 15 columns.  The total
size of the database ranges from 100-300 MB.
 
The problem we are seeing is that query and insert performance is
unusually bad and scales up linearly with database size.  Compared to MS
Access, the query times are several times slower.  Frankly, I was a bit
shocked at this, considering that most people seem to think the
performance is good.  However, I don't see anything that we are doing
wrong... we query the rows we want only by rowid.  I'm very puzzled that
this hasn't come up a lot in my searches of the mailing list, but
perhaps the slower query times aren't a concern for many of the
applications using SQLite.
 
Empirically speaking, we display our data in a scrolling two-dimensional
grid format.  With MS Access, this grid responds
instantaneously when moving through the grid.  With SQLite, there is
very noticeable stalling and lag, and the disk I/O is higher than MS
Access by roughly a factor of 10.
 
I suppose I am looking to see if anyone is seeing the same results
that I am seeing, and wondering if this is known and expected to be the
case.  The speed results on the website seem way off to me or must be so
skewed towards a small dataset that they do not apply in a real world
scenario.  I would also like to state that I am very impressed with the
simplicity of SQLite, which is rare to find these days.  It was very
easy to get up and running.  I'm just having trouble getting past the
performance issues.  Any explanation would be helpful.
 
Richard Kuo


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: [sqlite] create trigger before commit...

2004-06-15 Thread D. Richard Hipp
[EMAIL PROTECTED] wrote:
So, did the BEFORE COMMIT trigger get implemented? If so, is it stable/usable? I 
don't see it on the sqlite language page. If not, can anyone think of an 
alternative way to do this that doesn't require a table scan of 
TransactionEntryTable each time a new transaction is added?

There is an experimental C API to do this: sqlite_commit_hook().  It was added
in version 2.8.12 (check-in [1179] on 2004-Jan-15).  The only documentation
that I am aware of is the comments in the code.
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565


[sqlite] create trigger before commit...

2004-06-15 Thread ben . carlyle
G'day,

I seem to recall (but cannot find) an email on this mailing list sent by 
drh quite some time ago regarding triggers before commits. As I recall, he 
was battling with some kind of problem for which a trigger that executed 
immediately before the commit seemed to be the best solution.

I'm currently working on/planning/thinking about an accounting system. The 
system essentially consists of a transaction table, an account table, and 
a transaction entry table as follows:
CREATE TABLE AccountTable (AccountId INTEGER PRIMARY KEY, name);
CREATE TABLE TransactionTable (TransactionId INTEGER PRIMARY KEY, date, 
memo);
CREATE TABLE TransactionEntryTable (TransactionId, AccountId, amount);

I'm wanting to put some foreign key constraints in, and I think I can do 
that with triggers, but the tricky constraint of the data model is that 
sum(TransactionEntryTable.amount) must always equal zero for each 
TransactionId (this is double-entry bookkeeping).

My thinking is that I can check this with something approximating the 
following pseudo-triggers and table:
CREATE TABLE DirtyTransactions (TransactionId UNIQUE);
CREATE TRIGGER addDirtyTransaction BEFORE INSERT OR UPDATE OR DELETE ON
TransactionEntryTable
BEGIN
INSERT OR REPLACE INTO DirtyTransactions VALUES(old.TransactionId);
INSERT OR REPLACE INTO DirtyTransactions VALUES(new.TransactionId);
END;
CREATE TRIGGER checkDirtyTransactions BEFORE COMMIT
BEGIN
SELECT RAISE(Abort) WHERE EXISTS (
    SELECT 1 FROM TransactionEntryTable
    WHERE TransactionId IN (SELECT TransactionId FROM DirtyTransactions)
    GROUP BY TransactionId
    HAVING sum(amount) <> 0);
DELETE FROM DirtyTransactions;
END;
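BEFORE COMMIT never became real SQLite trigger syntax, so one alternative sketch is to run the balance check from application code immediately before committing. Python's sqlite3 is used here for illustration, with the table names from the schema above; the commit_if_balanced helper is hypothetical:

```python
import sqlite3

def commit_if_balanced(conn):
    """Hypothetical helper: commit only if every transaction's entries sum to zero."""
    unbalanced = conn.execute(
        "SELECT TransactionId FROM TransactionEntryTable "
        "GROUP BY TransactionId HAVING sum(amount) <> 0").fetchall()
    if unbalanced:
        conn.rollback()
        raise ValueError("unbalanced transactions: %r" % unbalanced)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE AccountTable (AccountId INTEGER PRIMARY KEY, name);
    CREATE TABLE TransactionTable (TransactionId INTEGER PRIMARY KEY, date, memo);
    CREATE TABLE TransactionEntryTable (TransactionId, AccountId, amount);
""")
# A balanced double entry: +100 against account 1, -100 against account 2.
conn.executemany("INSERT INTO TransactionEntryTable VALUES (?, ?, ?)",
                 [(1, 1, 100), (1, 2, -100)])
commit_if_balanced(conn)  # commits; an unbalanced batch would raise instead
```

Note that this scans the whole entry table, which is exactly what the DirtyTransactions table is meant to avoid; restricting the GROUP BY to the dirty ids would narrow the scan.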

So, did the BEFORE COMMIT trigger get implemented? If so, is it stable/usable? I 
don't see it on the sqlite language page. If not, can anyone think of an 
alternative way to do this that doesn't require a table scan of 
TransactionEntryTable each time a new transaction is added?

Benjamin.




Re: [sqlite] what's the character limit of a single sql in sqlite?

2004-06-15 Thread Dennis Cote

- Original Message - 
From: "Xuer" <[EMAIL PROTECTED]>
> could i use a long,long sql sentence to insert many items into a table
> once without limit?
> or there IS a limit so i have to accomplish it by doing it multiple
> times?

Hi,

There is no limit using the current sqlite_exec() API function. It will
compile and execute all the SQL statements in the string pointed to by the
sql parameter one after the other.

I use this to initialize a new empty database by executing all the table,
index, and trigger creation statements and then inserting fixed data into
some of the tables. This schema information is kept in a text file which is
read into memory and then executed with one call. With all the statements
executed in a single transaction it is very fast.
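The same pattern in a modern binding, for illustration: Python's sqlite3 offers executescript(), which compiles and runs every statement in the string one after the other (the schema here is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# In practice this string would be read from the schema text file.
schema = """
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE INDEX person_name ON person(name);
INSERT INTO person (name) VALUES ('alice');
INSERT INTO person (name) VALUES ('bob');
"""
conn.executescript(schema)  # runs all statements in one call

count = conn.execute("SELECT count(*) FROM person").fetchone()[0]
print(count)
```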

I have noticed, however, that SQLite queries execute somewhat slower after this
has been done. I have no idea why. My workaround is simply to close the
database file after it is initialized and then immediately reopen it. Then
all the queries run at full speed.

I hope this helps.

Dennis Cote





RE: [sqlite] Solving the "busy" problem with many reads and a single write

2004-06-15 Thread Wempa, Kristofer \(Kris\), ALABS

>> Our tool to query the database is written in shell and invokes the
>> sqlite command.  Rather than re-write it in C to go through the API, I
>> figured that I'd try changing the read locks to blocking in order to
>> try to save time.  Reads ONLY happen through this shell tool.  Writes
>> only happen through an API call.

> In your ~/.sqliterc file, add this line:
>
>   .timeout 5000
>   -- timeout value in milliseconds; adjust to your needs.
>
> Or you can put the ".timeout" line directly in your shell commands.

Thanks.  Both solutions (the timeout and the re-compile with blocking
read lock) worked just fine.
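For completeness, the same busy-timeout behaviour is exposed programmatically in most bindings; for example, Python's sqlite3 takes a timeout in seconds (the file name is illustrative):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")  # illustrative path

# Wait up to 5 seconds for a competing writer's lock to clear,
# the equivalent of ".timeout 5000" (milliseconds) in the sqlite shell.
conn = sqlite3.connect(db_path, timeout=5.0)
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
```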




Re: [sqlite] REPLACE statement

2004-06-15 Thread Christian Smith

Just a stab-in-the-dark suggestion. Not fully tested (I tried a simple noddy
test just now).


How about a pre-insert trigger, which will delete entries in Playlist with
the same key and a timestamp earlier than the new row's. Then you can have a
constraint such that if the primary key is duplicated, the new row is simply
dropped.

Cases:
1. No existing row.
   - Trigger will delete nothing.
   - New row will be inserted.
2. Existing older row.
   - Trigger will delete existing row (timestamp predates our new row.)
   - New row will be inserted.
3. Existing newer row.
   - Trigger will delete nothing.
   - New row will be discarded as it clashes with existing primary key.

So:

-- Create table with ignore constraint on primary key conflict
CREATE TABLE Playlist
(
   PlaylistID     integer PRIMARY KEY ON CONFLICT IGNORE,
   ComputerID     integer,
   NetworkID      integer,
   PlaylistNum    integer,
   PlaylistItems  text,
   LastUpdated    text
);

-- Create new trigger to delete older row
CREATE TRIGGER Playlist_InsertCheck
BEFORE INSERT ON Playlist
FOR EACH ROW
BEGIN
  DELETE FROM Playlist
WHERE PlaylistID = NEW.PlaylistID
AND LastUpdated < NEW.LastUpdated;
END;

-- Index (PlaylistID,LastUpdated) to speed up the DELETE in the trigger
-- CREATE INDEX Playlist_IDTimeIdx ON Playlist(PlaylistID,LastUpdated);


Quick test:
sqlite> insert into playlist(PlaylistID,LastUpdated,PlaylistItems) values (0,0,0);
sqlite> select * from playlist;
0||||0|0
sqlite> insert into playlist(PlaylistID,LastUpdated,PlaylistItems) values (0,0,1);
sqlite> select * from playlist;
0||||0|0
sqlite> insert into playlist(PlaylistID,LastUpdated,PlaylistItems) values (0,1,1);
sqlite> select * from playlist;
0||||1|1
sqlite> insert into playlist(PlaylistID,LastUpdated,PlaylistItems) values (0,0,1);
sqlite> select * from playlist;
0||||1|1
sqlite>

Caveats:
This assumes the LastUpdated comparison is accurate. I'm not sure how dates
are compared in SQLite (check this), and make sure the clocks are roughly in
sync.
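The scheme can also be checked end to end in a modern binding; a quick verification sketch in Python's sqlite3, reproducing the four inserts from the quick test above (timestamps are passed as strings so the comparison is plain text ordering):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Playlist
(
   PlaylistID     integer PRIMARY KEY ON CONFLICT IGNORE,
   ComputerID     integer,
   NetworkID      integer,
   PlaylistNum    integer,
   PlaylistItems  text,
   LastUpdated    text
);
CREATE TRIGGER Playlist_InsertCheck
BEFORE INSERT ON Playlist
FOR EACH ROW
BEGIN
  DELETE FROM Playlist
    WHERE PlaylistID = NEW.PlaylistID
      AND LastUpdated < NEW.LastUpdated;
END;
""")

ins = ("INSERT INTO Playlist (PlaylistID, LastUpdated, PlaylistItems) "
       "VALUES (?, ?, ?)")
for row in [(0, "0", "0"), (0, "0", "1"), (0, "1", "1"), (0, "0", "1")]:
    conn.execute(ins, row)

# Only the newest version of playlist 0 should survive.
rows = conn.execute("SELECT PlaylistItems, LastUpdated FROM Playlist").fetchall()
print(rows)
```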


On Mon, 14 Jun 2004, Richard Boehme wrote:

>In my database, I am going to be inserting many rows at a time. There
>might be duplicate rows in the database which I would not know the
>primary key for. There are several databases on several computers
>merging into one database which the first ones only occasionally
>contact. The schema so far is this:
>
>CREATE TABLE Playlist
>(
>   PlaylistID     integer PRIMARY KEY,
>   ComputerID     integer,
>   NetworkID      integer,
>   PlaylistNum    integer,
>   PlaylistItems  text,
>   LastUpdated    text
>);
>
>It would need to replace any playlist with the same PlaylistItems and
>NetworkID, for instance. In other words, my criterion for whether it gets
>updated rather than inserted is whether two or three keys match exactly. Is
>there a good way to do this with a lot of transactions? I'm not sure how
>flexible the REPLACE command or triggers would be for this. And as I
>mentioned, it might be a couple thousand rows at a time, so I would
>like to have it in a transaction, if possible.
>
>Thanks for the help.
>
>Richard Boehme
>





Re: [sqlite] what's the character limit of a single sql in sqlite?

2004-06-15 Thread Alex K
There is a limit of about 1 MB per row; see the description of the SQLITE_TOOBIG
error below. So if your SQL string is less than 1 MB in size, it definitely
works; if a single INSERT is more than 1 MB, it definitely does not. But you can
change this limit in the source code and recompile if you want to store large
text files in a single row.

SQLITE_TOOBIG

SQLite will not store more than about 1 megabyte of data in a single row
of a single table. If you attempt to store more than 1 megabyte in a single
row, this is the return code you get. 


> could i use a long,long sql sentence to insert many items into a table
> once without limit?
> or there IS a limit so i have to accomplish it by doing it multiple
> times?
> 
> thanks in advance.:)
> 
> 


