AW: [sqlite] Does SQLite support modifying date through views?

2007-12-09 Thread Michael Ruck
It does not, but you can attach triggers to a view to achieve the same
effect. 
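Michael's suggestion can be sketched with SQLite's INSTEAD OF triggers. The sketch below (using Python's built-in sqlite3 module; the table and column names are made up for illustration) routes an UPDATE against a view with a computed column back to the base table:

```python
import sqlite3

# A view is read-only in SQLite, but an INSTEAD OF trigger can translate
# writes on the view into writes on the underlying table.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE items(id INTEGER PRIMARY KEY, price INTEGER);
INSERT INTO items VALUES(1, 10);
-- View exposing a computed column.
CREATE VIEW items_v AS SELECT id, price * 2 AS double_price FROM items;
-- Without this trigger, the UPDATE below would fail ("cannot modify items_v").
CREATE TRIGGER items_v_upd INSTEAD OF UPDATE ON items_v
BEGIN
    UPDATE items SET price = NEW.double_price / 2 WHERE id = OLD.id;
END;
""")
db.execute("UPDATE items_v SET double_price = 40 WHERE id = 1")
price = db.execute("SELECT price FROM items WHERE id = 1").fetchone()[0]
print(price)  # 20
```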

> -Original Message-
> Von: Robert Smith [mailto:[EMAIL PROTECTED] 
> Sent: Sunday, December 9, 2007 08:31
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Does SQLite support modifying data through views?
> 
> 
> I am trying to implement an embedded application using SQLite, and I
> need to modify data through views. The UPDATE, INSERT, or DELETE
> operations may refer to a computed column or a built-in function in a
> view, and I have run into problems. I am wondering whether SQLite
> supports modifying data through views.
> 
> Thanks,
> Robert
> -- 
> View this message in context: 
> http://www.nabble.com/Does-SQLite-support-modifying-date-through-views--tp14236459p14236459.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, send email to [EMAIL PROTECTED]
> ---------------------------------------------------------------------
> 
> 





Re: AW: [sqlite] Transactional DDL

2007-11-27 Thread Dan


On Nov 27, 2007, at 10:27 PM, Michael Ruck wrote:


Are all CREATE ... statements transactional or is only CREATE TABLE
transactional?


All of the CREATE and DROP statements work properly within
transactions. If the containing transaction is rolled back, the CREATE
or DROP is rolled back along with everything else.

Dan.
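The behavior Dan describes is easy to verify. A minimal sketch (Python's built-in sqlite3 module, with explicit transaction handling) shows a CREATE TABLE disappearing on rollback:

```python
import sqlite3

# A CREATE TABLE inside a rolled-back transaction leaves no trace.
db = sqlite3.connect(":memory:")
db.isolation_level = None  # manage transactions explicitly
db.execute("BEGIN")
db.execute("CREATE TABLE t(x)")
db.execute("INSERT INTO t VALUES(1)")
db.execute("ROLLBACK")
tables = [r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # []
```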



Mike


-Original Message-
Von: Dan [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 27, 2007 15:59
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Transactional DDL


On Nov 27, 2007, at 9:47 PM, Igor Sereda wrote:


I noticed that CREATE TABLE works well within a transaction, which
was a pleasant surprise. I can create a table and insert some rows
in it, all quite ACIDly - wow!

My question is, is that a declared contract or just a peculiarity
that may disappear in future versions? I couldn't find any

specs of

that behavior in documentation. If I missed it, please point me to
the URL. If there are no mentions of that in docs, well, it's
probably worth mentioning.


It's a supported feature.


Also, which statements are not transactional? VACUUM is obviously
one of them, are there any other?


Some of the pragma statements. Can't think of anything else.

Dan.















AW: [sqlite] Transactional DDL

2007-11-27 Thread Michael Ruck
Are all CREATE ... statements transactional or is only CREATE TABLE
transactional?

Mike 

> -Original Message-
> Von: Dan [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, November 27, 2007 15:59
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Transactional DDL
> 
> 
> On Nov 27, 2007, at 9:47 PM, Igor Sereda wrote:
> 
> > I noticed that CREATE TABLE works well within a transaction, which  
> > was a pleasant surprise. I can create a table and insert some rows  
> > in it, all quite ACIDly - wow!
> >
> > My question is, is that a declared contract or just a peculiarity  
> > that may disappear in future versions? I couldn't find any 
> specs of  
> > that behavior in documentation. If I missed it, please point me to  
> > the URL. If there are no mentions of that in docs, well, it's  
> > probably worth mentioning.
> 
> It's a supported feature.
> 
> > Also, which statements are not transactional? VACUUM is obviously  
> > one of them, are there any other?
> 
> Some of the pragma statements. Can't think of anything else.
> 
> Dan.
> 
> 
> 
> 





AW: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread Michael Ruck
I know that a natural join exists, but it is not automatic as
it seems to be in MySQL. 

> -Original Message-
> Von: Dennis Cote [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, November 20, 2007 18:32
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Re: Performance tuning, and other 
> (silly?) SQLite questions.
> 
> Michael Ruck wrote:
> >>
> >> Ah.  I have been reading a PHP/MySQL book, that I thought 
> said a MySQL
> >> server would see the common column names and automagically 
> join the 2.
> >> Either I misremember what the book said (it's not with me 
> >> here), or this
> >> is a feature of MySQL, not present in SQLite.  Anyway, what 
> >> you suggest
> >> works just fine.
> >> 
> >
> > SQLite does not implement this feature. It's not in the SQL
> > standard AFAIK.
> >
> >   
> >   
> 
> This feature *is* part of the SQL standard and is implemented 
> by SQLite. 
> It is called a NATURAL JOIN.
> 
> select * from a_table natural join b_table;
> 
> This will select all rows where all fields with the same name 
> in the two 
> tables are equal. See http://en.wikipedia.org/wiki/Join_(SQL) 
> for more 
> details. Note, only one column, with the same name as the matching 
> columns from the two tables, is generated in the result (and 
> this column 
> is not considered to be from either table in standard SQL).
> 
> HTH
> Dennis Cote
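Dennis's point can be checked with throwaway tables. In this sketch (Python's built-in sqlite3 module; the table names follow his example) the shared id column appears only once in the result, and only matching rows survive:

```python
import sqlite3

# NATURAL JOIN matches on the shared column name and emits it once.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE a_table(id INTEGER, a_val TEXT);
CREATE TABLE b_table(id INTEGER, b_val TEXT);
INSERT INTO a_table VALUES(1, 'x'), (2, 'y');
INSERT INTO b_table VALUES(2, 'z');
""")
cur = db.execute("SELECT * FROM a_table NATURAL JOIN b_table")
print([d[0] for d in cur.description])  # ['id', 'a_val', 'b_val']
print(cur.fetchall())  # [(2, 'y', 'z')]
```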
> 
> 
> 
> 
> 





Re: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-30 Thread Daniel Önnerby
Why are you using INSERT OR IGNORE? If you read
http://www.sqlite.org/lang_conflict.html, it states that IGNORE will
not return any errors. Isn't the default behavior, INSERT OR ABORT (or
just plain INSERT), what you are looking for?
A plain INSERT will return an error (SQLITE_CONSTRAINT) if there is a
conflict. If you get a conflict, do not trust
sqlite3_last_insert_rowid(), since (I guess) it will return the rowid
of the last successful insert.


Best regards
Daniel


AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-30 Thread Michael Ruck
I'm not blaming anyone. I just think it should be mentioned in the docs.

Mike 

-Original Message-
Von: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 29, 2007 20:12
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

As has been carefully explained by several people, it is reliable.  You 
just did not think through your application.  You could make an 
extension to SQLite and implement an sqlite3_last_insert_or_ignore_id 
function, but to blithely assume that you can use last_insert_id with 
INSERT OR IGNORE is not logical, and to blame others for your oversight 
is not helpful.

Michael Ruck wrote:
> I don't get an error code. So how should I decide if I should call
> sqlite3_last_insert_rowid() or not? :) That's the problem - I don't have
> any indication if an insert
> was actually performed or if it was simply ignored - thus I don't have any
> possibility to decide if the call is valid or not. This makes the OR
> IGNORE clause or the sqlite3_last_insert_rowid() function useless for *my
> purposes*. I would have never pursued this path in tests, if I would've
> known beforehand that it is not reliable if used with ON CONFLICT clauses.
>
> Mike
>
> -Original Message-
> Von: Odekirk, Shawn [mailto:[EMAIL PROTECTED] 
> Sent: Monday, October 29, 2007 14:04
> To: sqlite-users@sqlite.org
> Subject: RE: AW: AW: [sqlite] INSERT OR IGNORE and
> sqlite3_last_insert_rowid()
>
> The sqlite3_last_insert_rowid function is completely, 100% reliable in
> your scenario.  The problem is that in your scenario you shouldn't be
> calling that function.
> The function is called sqlite3_last_insert_rowid, not
> sqlite3_last_insert_or_ignore_rowid, and not
> sqlite3_last_insert_or_fail_rowid.  It makes perfect sense that it returns
> the row id of the last row inserted successfully.  This function should
> only be called after a successful insert.  In your scenario you have not
> performed a successful insert.  There is no reason to think that the
> function will return a meaningful row id after a failed insert attempt.
> I hope my response was not too harsh.  You seem so adamant that there is a
> problem with the function or documentation, and I completely disagree.
>
> Shawn
>
> -Original Message-
> From: Michael Ruck [mailto:[EMAIL PROTECTED]
> Sent: Sunday, October 28, 2007 12:55 PM
> To: sqlite-users@sqlite.org
> Subject: AW: AW: AW: [sqlite] INSERT OR IGNORE and
> sqlite3_last_insert_rowid()
>
> I'd suggest putting this into the documentation of
> sqlite3_last_insert_rowid(), that the call is not reliable in scenarios
> such as this one. 
>
> -Original Message-
> Von: D. Richard Hipp [mailto:[EMAIL PROTECTED]
> Sent: Sunday, October 28, 2007 17:48
> To: sqlite-users@sqlite.org
> Subject: Re: AW: AW: [sqlite] INSERT OR IGNORE and
> sqlite3_last_insert_rowid()
>
>
> On Oct 28, 2007, at 10:59 AM, Michael Ruck wrote:
>
>   
>> Yes, I am well aware of this possibility as I've written in my  
>> initial mail.
>> It just doesn't fit with the
>> description of sqlite3_last_insert_rowid() in my understanding. I  
>> think this
>> is a bug - either in the documentation
>> or in the implementation. sqlite3_last_insert_rowid() should return  
>> the
>> correct id, no matter what and it doesn't.
>>
>> 
>
> Consider this scenario:
>
>  CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
>  INSERT INTO ex1 VALUES(1,1,1);
>  INSERT INTO ex1 VALUES(2,2,2);
>  INSERT INTO ex1 VALUES(3,3,3);
>
> Now you do your INSERT OR IGNORE:
>
>  INSERT OR IGNORE INTO ex1 VALUES(1,2,3);
>
> Three different constraints fail, one for each of three different
> rows.  So if sqlite3_last_insert_rowid() were to operate as you
> suggest and return the rowid of the failed insert, which rowid
> would it return?  1, 2, or 3?
>
>
> D. Richard Hipp
> [EMAIL PROTECTED]

AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-29 Thread Michael Ruck
This does look like a solution indeed. I'll try this one later.

Thank you!
Mike 

-Original Message-
Von: Simon Davies [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 29, 2007 16:40
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

On 29/10/2007, Odekirk, Shawn <[EMAIL PROTECTED]> wrote:
> Adding "primary key" to column "a" results in the behavior I think you
> were first expecting.
>
> sqlite> create table tmp (a integer primary key, b integer);
> sqlite> create unique index tmpIndex on tmp (a, b);
> sqlite> insert into tmp values (1, 1);
> sqlite> insert into tmp values (2, 2);
> sqlite> select last_insert_rowid();
> 2
> sqlite> insert or replace into tmp values (1, 1);
> sqlite> select last_insert_rowid();
> 1
> sqlite> select * from tmp;
> 1|1
> 2|2
>
> I wonder if Michael could use OR REPLACE instead of OR IGNORE to solve
> his problem.
>
> Shawn
>

That was the result I was seeking (as a potential answer to Michael's
problem).

Looking at Michael's original post, he has declared an INTEGER PRIMARY
KEY column, so "OR REPLACE" instead of "OR IGNORE" could be the
solution as you suggest.

Rgds,
Simon








RE: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-29 Thread Odekirk, Shawn
The sqlite3_last_insert_rowid function is completely, 100% reliable in your 
scenario.  The problem is that in your scenario you shouldn't be calling that 
function.
The function is called sqlite3_last_insert_rowid, not 
sqlite3_last_insert_or_ignore_rowid, and not sqlite3_last_insert_or_fail_rowid. 
 It makes perfect sense that it returns the row id of the last row inserted 
successfully.  This function should only be called after a successful insert.  
In your scenario you have not performed a successful insert.  There is no 
reason to think that the function will return a meaningful row id after a 
failed insert attempt.
I hope my response was not too harsh.  You seem so adamant that there is a 
problem with the function or documentation, and I completely disagree.

Shawn

-Original Message-
From: Michael Ruck [mailto:[EMAIL PROTECTED] 
Sent: Sunday, October 28, 2007 12:55 PM
To: sqlite-users@sqlite.org
Subject: AW: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

I'd suggest noting in the documentation of
sqlite3_last_insert_rowid() that
the call is not reliable in scenarios such as this one. 

-Original Message-
Von: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: Sunday, October 28, 2007 17:48
To: sqlite-users@sqlite.org
Subject: Re: AW: AW: [sqlite] INSERT OR IGNORE and
sqlite3_last_insert_rowid()


On Oct 28, 2007, at 10:59 AM, Michael Ruck wrote:

> Yes, I am well aware of this possibility as I've written in my  
> initial mail.
> It just doesn't fit with the
> description of sqlite3_last_insert_rowid() in my understanding. I  
> think this
> is a bug - either in the documentation
> or in the implementation. sqlite3_last_insert_rowid() should return  
> the
> correct id, no matter what and it doesn't.
>

Consider this scenario:

 CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
 INSERT INTO ex1 VALUES(1,1,1);
 INSERT INTO ex1 VALUES(2,2,2);
 INSERT INTO ex1 VALUES(3,3,3);

Now you do your INSERT OR IGNORE:

 INSERT OR IGNORE INTO ex1 VALUES(1,2,3);

Three different constraints fail, one for each of three different
rows.  So if sqlite3_last_insert_rowid() were to operate as you
suggest and return the rowid of the failed insert, which rowid
would it return?  1, 2, or 3?


D. Richard Hipp
[EMAIL PROTECTED]















Re: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread Trevor Talbot
On 10/28/07, Michael Ruck <[EMAIL PROTECTED]> wrote:
> I'd suggest putting this into the documentation of
> sqlite3_last_insert_rowid(), that
> the call is not reliable in scenarios such as this one.

It might be appropriate to just stress it only works for successful
INSERTs.  I'd just assumed that was true anyway.




AW: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread Michael Ruck
I'd suggest noting in the documentation of
sqlite3_last_insert_rowid() that
the call is not reliable in scenarios such as this one. 

-Original Message-
Von: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: Sunday, October 28, 2007 17:48
To: sqlite-users@sqlite.org
Subject: Re: AW: AW: [sqlite] INSERT OR IGNORE and
sqlite3_last_insert_rowid()


On Oct 28, 2007, at 10:59 AM, Michael Ruck wrote:

> Yes, I am well aware of this possibility as I've written in my  
> initial mail.
> It just doesn't fit with the
> description of sqlite3_last_insert_rowid() in my understanding. I  
> think this
> is a bug - either in the documentation
> or in the implementation. sqlite3_last_insert_rowid() should return  
> the
> correct id, no matter what and it doesn't.
>

Consider this scenario:

 CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
 INSERT INTO ex1 VALUES(1,1,1);
 INSERT INTO ex1 VALUES(2,2,2);
 INSERT INTO ex1 VALUES(3,3,3);

Now you do your INSERT OR IGNORE:

 INSERT OR IGNORE INTO ex1 VALUES(1,2,3);

Three different constraints fail, one for each of three different
rows.  So if sqlite3_last_insert_rowid() were to operate as you
suggest and return the rowid of the failed insert, which rowid
would it return?  1, 2, or 3?


D. Richard Hipp
[EMAIL PROTECTED]











AW: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread Michael Ruck
Good point.

Thanks.

-Original Message-
Von: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: Sunday, October 28, 2007 17:48
To: sqlite-users@sqlite.org
Subject: Re: AW: AW: [sqlite] INSERT OR IGNORE and
sqlite3_last_insert_rowid()


On Oct 28, 2007, at 10:59 AM, Michael Ruck wrote:

> Yes, I am well aware of this possibility as I've written in my  
> initial mail.
> It just doesn't fit with the
> description of sqlite3_last_insert_rowid() in my understanding. I  
> think this
> is a bug - either in the documentation
> or in the implementation. sqlite3_last_insert_rowid() should return  
> the
> correct id, no matter what and it doesn't.
>

Consider this scenario:

 CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
 INSERT INTO ex1 VALUES(1,1,1);
 INSERT INTO ex1 VALUES(2,2,2);
 INSERT INTO ex1 VALUES(3,3,3);

Now you do your INSERT OR IGNORE:

 INSERT OR IGNORE INTO ex1 VALUES(1,2,3);

Three different constraints fail, one for each of three different
rows.  So if sqlite3_last_insert_rowid() were to operate as you
suggest and return the rowid of the failed insert, which rowid
would it return?  1, 2, or 3?


D. Richard Hipp
[EMAIL PROTECTED]











Re: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread D. Richard Hipp


On Oct 28, 2007, at 10:59 AM, Michael Ruck wrote:

Yes, I am well aware of this possibility as I've written in my  
initial mail.

It just doesn't fit with the
description of sqlite3_last_insert_rowid() in my understanding. I  
think this

is a bug - either in the documentation
or in the implementation. sqlite3_last_insert_rowid() should return  
the

correct id, no matter what and it doesn't.



Consider this scenario:

CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
INSERT INTO ex1 VALUES(1,1,1);
INSERT INTO ex1 VALUES(2,2,2);
INSERT INTO ex1 VALUES(3,3,3);

Now you do your INSERT OR IGNORE:

INSERT OR IGNORE INTO ex1 VALUES(1,2,3);

Three different constraints fail, one for each of three different
rows.  So if sqlite3_last_insert_rowid() were to operate as you
suggest and return the rowid of the failed insert, which rowid
would it return?  1, 2, or 3?


D. Richard Hipp
[EMAIL PROTECTED]
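Running this scenario (here via Python's built-in sqlite3 module) confirms that the ignored insert leaves last_insert_rowid() at the value from the last successful insert:

```python
import sqlite3

# The scenario above: three rows, then an INSERT OR IGNORE that
# conflicts on every column and therefore inserts nothing.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
INSERT INTO ex1 VALUES(1,1,1);
INSERT INTO ex1 VALUES(2,2,2);
INSERT INTO ex1 VALUES(3,3,3);
""")
rowid_before = db.execute("SELECT last_insert_rowid()").fetchone()[0]
db.execute("INSERT OR IGNORE INTO ex1 VALUES(1,2,3)")
rowid_after = db.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rowid_before, rowid_after)  # 3 3 - the ignored insert changed nothing
```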







AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread Michael Ruck
Yes, I am well aware of this possibility as I've written in my initial mail.
It just doesn't fit with the 
description of sqlite3_last_insert_rowid() in my understanding. I think this
is a bug - either in the documentation
or in the implementation. sqlite3_last_insert_rowid() should return the
correct id, no matter what and it doesn't.

Since I have a bunch of tables of this structure I don't want to waste
memory/processor time just to retrieve the
rowid I should have gotten in the first place from the insert.

Mike

-Original Message-
Von: Kees Nuyt [mailto:[EMAIL PROTECTED] 
Sent: Sunday, October 28, 2007 15:36
To: sqlite-users@sqlite.org
Subject: Re: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

[Default] On Sun, 28 Oct 2007 10:00:52 +0100, "Michael Ruck"
<[EMAIL PROTECTED]> wrote:

>Hi,
>
>I did specify UNIQUE for category. The id is also kept, so everything 
>is working
>*except* that I don't get the id of the record ignored from 
>sqlite3_last_insert_rowid().
>
>Mike

You could simply do a
SELECT id FROM categories WHERE category = '';
to retrieve the id.
After the (ignored) INSERT the database pages with the relevant parts of the
BTree for the UNIQUE index on category will still be in memory, so the
SELECT will be fast.

For even more speed you can prepare the SELECT statement during the init of
your program, and bind to the appropriate values every time you need it, so
it doesn't have to be parsed every time.

Regards,
--
  (  Kees Nuyt
  )
c[_]








Re: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread Kees Nuyt
[Default] On Sun, 28 Oct 2007 10:00:52 +0100, "Michael Ruck"
<[EMAIL PROTECTED]> wrote:

>Hi,
>
>I did specify UNIQUE for category. The id is also kept, so everything is
>working 
>*except* that I don't get the id of the record ignored from
>sqlite3_last_insert_rowid().
>
>Mike

You could simply do a 
SELECT id FROM categories WHERE category = '';
to retrieve the id.
After the (ignored) INSERT the database pages with the relevant
parts of the BTree for the UNIQUE index on category will still
be in memory, so the SELECT will be fast.

For even more speed you can prepare the SELECT statement during
the init of your program, and bind to the appropriate values
every time you need it, so it doesn't have to be parsed every
time.

Regards,
-- 
  (  Kees Nuyt
  )
c[_]
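Kees's INSERT OR IGNORE followed by a SELECT might look like the sketch below (Python's built-in sqlite3 module; the categories schema follows Michael's earlier mail, and the helper function name is made up):

```python
import sqlite3

# INSERT OR IGNORE, then SELECT the id: works whether the row was
# freshly inserted or already existed. The SELECT hits pages already
# warmed by the UNIQUE-index probe, so it is cheap.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE categories (id INTEGER PRIMARY KEY, category TEXT UNIQUE)")

def category_id(name):
    db.execute("INSERT OR IGNORE INTO categories VALUES (NULL, ?)", (name,))
    return db.execute(
        "SELECT id FROM categories WHERE category = ?", (name,)).fetchone()[0]

first = category_id("books")
again = category_id("books")  # duplicate insert is ignored
print(first, again)  # 1 1 - same id both times
```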




AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

2007-10-28 Thread Michael Ruck
Hi,

I did specify UNIQUE for category. The id is also kept, so everything is
working 
*except* that I don't get the id of the record ignored from
sqlite3_last_insert_rowid().

Mike

-Original Message-
Von: Kees Nuyt [mailto:[EMAIL PROTECTED] 
Sent: Saturday, October 27, 2007 23:45
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()

[Default] On Sat, 27 Oct 2007 16:26:36 +0200, "Michael Ruck"
<[EMAIL PROTECTED]> wrote:

>Hi,
>
>I have a table of unique values in the following format:
>
>CREATE TABLE categories (id INTEGER PRIMARY KEY, category UNIQUE TEXT)
>
>I want inserts into this table to succeed, even though the corresponding
>entry already exists. So I use inserts in the following format:
>
>INSERT OR IGNORE INTO categories VALUES (NULL, ?)
>
>However, if I follow this successful execution with a call to
>sqlite3_last_insert_rowid() I don't get the rowid of row, which caused the
>insert to be ignored, but one I preformed previously (which doesn't
>necessarily have anything to do with this one.) This causes some relations
>in my database model to break.
>
>I know I could use INSERT OR FAIL and a subsequent SELECT, but that seems
>awkward and like unnecessary code bloat to me. Additionally I kind of
>think this breaks the description and sense of sqlite3_last_insert_rowid().
>
>SQlite version used is 3.3.16.
>
>Is this intentional? Any suggestions or should I file a ticket for this?
>
>Thanks!
>Mike

You supply NULL for the primary key, which in this case means
SQLite will make up a new id for you.

http://www.sqlite.org/lang_createtable.html :

Specifying a PRIMARY KEY normally just creates a UNIQUE index on
the corresponding columns. However, if primary key is on a
single column that has datatype INTEGER, then that column is
used internally as the actual key of the B-Tree for the table.
This means that the column may only hold unique integer values.
(Except for this one case, SQLite ignores the datatype
specification of columns and allows any kind of data to be put
in a column regardless of its declared datatype.) If a table
does not have an INTEGER PRIMARY KEY column, then the B-Tree key
will be an automatically generated integer.  The B-Tree key for a
row can always be accessed using one of the special names
"ROWID", "OID", or "_ROWID_". This is true regardless of whether
or not there is an INTEGER PRIMARY KEY. An INTEGER PRIMARY KEY
column can also include the keyword AUTOINCREMENT. The
AUTOINCREMENT keyword modifies the way that B-Tree keys are
automatically generated. Additional detail on automatic B-Tree
key generation is available separately.


http://www.sqlite.org/autoinc.html :


When a new row is inserted into an SQLite table, the ROWID can
either be specified as part of the INSERT statement or it can be
assigned automatically by the database engine. To specify a
ROWID manually, just include it in the list of values to be
inserted. For example:

CREATE TABLE test1(a INT, b TEXT);
INSERT INTO test1(rowid, a, b) VALUES(123, 5, 'hello');

If no ROWID is specified on the insert, an appropriate ROWID is
created automatically. The usual algorithm is to give the newly
created row a ROWID that is one larger than the largest ROWID in
the table prior to the insert. 

And:
If a column has the type INTEGER PRIMARY KEY AUTOINCREMENT then
a slightly different ROWID selection algorithm is used. 


By supplying NULL as the key (ROWID) you actually don't specify
a value, so SQLite creates a new row with a new id.
If you want category to be unique, you will have to specify a
UNIQUE constraint for it.

HTH
-- 
  (  Kees Nuyt
  )
c[_]








AW: [sqlite] Index size in file

2007-10-04 Thread Michael Ruck
If you're running under such tight constraints, you should take care to
choose the right tool for the job. Apparently SQLite isn't the right
tool for this one.

Mike

-Original Message-
Von: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] 
Sent: Friday, October 5, 2007 00:19
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Index size in file




Let's assume that my whole database can be in the cache. If my indexes
have duplicate data, then I will either need a bigger cache or have to
page out row data in favour of index data. In that case it will either
be slower or require more memory to keep duplicate data for the
indexes, as opposed to referencing the original data.

Clive





John Stanton <[EMAIL PROTECTED]> on 05/10/2007 00:54:21

Please respond to sqlite-users@sqlite.org

To:   sqlite-users@sqlite.org
cc:(bcc: clive/Emultek)

Subject:  Re: [sqlite] Index size in file



Trevor Talbot wrote:
> On 10/4/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>A B-Tree index holds keys in sorted sequence.  They are in random
>>sequence in the database.  That requires holding the keys in the B-Tree
>>nodes.
>
>
> Actually, it doesn't strictly require that; it could store references
> to the keys.  An obvious tradeoff is I/O; an index walk is less useful
> if you have to do random seeks to the locations of row data just to
> get the keys to walk the tree in the first place.  IOW in simplistic
> terms, an index walk suddenly doubles in disk I/O.
>
> The information on SQL Server would be interesting, as I know it
> stores sort keys under some conditions, which is effectively duplicate
> data.
>
One would need to be a paleontologist to measure the performance of an
ordered index with indirect key references.































Re: AW: [sqlite] Re: In Mem Query Performance

2007-07-03 Thread John Stanton
Using run length encoding on your keys is an ingenious approach. 
Another idea would be to reverse the keys so the significant chars are 
first.  Splitting keys to produce the effect used in prefix b-trees could 
also be an option.


Michael Ruck wrote:

As has been suggested numerous times, you should split the key. The keys
you've shown are very long and differ only in the last characters. Try
splitting the key (maybe into two or three columns) and order the
columns by change frequency. That way SQLite doesn't have to run as many
equality comparisons and detects differences earlier. Your times are as
bad as they are because most of the time is spent comparing for
equality, hitting the first difference only after comparing 128
characters that match most of the time (taken from your earlier mails.)

I would suggest splitting the key into at least two columns, ordered by
change frequency. You won't lose any data; you just need to concatenate
the columns again in your SQL queries to obtain results in the same
format as previously.

Mike

-Ursprüngliche Nachricht-
Von: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Gesendet: Dienstag, 3. Juli 2007 12:12

An: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Betreff: Re: [sqlite] Re: In Mem Query Performance

Hi Ken,

Thanks a lot.
But this would require the key to be short. Will Check if this is acceptable
to all as we may not be able to port old db data if the key format is
changed.

regards
ragha


**
 This email and its attachments contain confidential information from
HUAWEI, which is intended only for the person or entity whose address is
listed above. Any use of the information contained herein in any way
(including, but not limited to, total or partial disclosure, reproduction,
or dissemination) by persons other than the intended recipient(s) is
prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!
 


*

- Original Message -
From: Ken <[EMAIL PROTECTED]>
Date: Tuesday, July 3, 2007 0:46 am
Subject: Re: [sqlite] Re: In Mem Query Performance



Timings:

Calling Test Now
TIME_ms=1210
start=1183394494
end=1183394496

Calling Test Now
TIME_ms=1164
start=1183394504
end=1183394505

(that is, 1.2 and 1.1 seconds)

Data generated using:
#include <iostream>
#include <fstream>
#include <string>
#include <cstdio>

using namespace std;

int main()
{
    fstream fout("data.sql", ios::out);
    fstream fout1("query.sql", ios::out);
    // Schema
    fout << "create table test (name text not null, ser text not null, "
            "doid text, primary key (name, ser));" << endl;

    for (int j = 0; j < 10; j++)
    {
        char* key = tmpnam(0);
        string ser = "...";   // long constant serial string (truncated in the archive)

        fout1 << key << endl;
        fout << "insert into test values ('" << key << "','" << ser
             << "','222...');" << endl;
    }
    return 0;
}

### Load data using:
sqlite3  abc.db
> pragma page_size=4096;
> begin;
> .read data.sql
> commit;

= Performance code ==
// Compile: g++ -g main.cpp -I. -L. -lsqlite3 -lpthread

#include <iostream>
#include <fstream>
#include <string>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <sys/timeb.h>
#include <unistd.h>

#include "sqlite3.h"

using namespace std;

int main()
{
    struct timeb startTime;
    struct timeb endTime;
    double       ms;

    sqlite3* db;
    int ret = sqlite3_open("abc.db", &db);

    char* errmsg;
    char  buf[1024];

    sprintf(buf, "create temporary table ttest as select * from test;");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);
    if (ret != SQLITE_OK) {
        printf("failed to create temp table: %d\n", ret);
        exit(ret);
    }

    sprintf(buf, "create index tidx on ttest (name, ser);");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);
    if (ret != SQLITE_OK) {
        printf("failed to create index: %d\n", ret);
        exit(ret);
    }

    // read all the srvName from file and store it in arr and query.
    fstream fin("query.sql", ios::in);
    string data[10];

    for (int j = 0; !fin.eof(); j++)
    {
        fin >> data[j];
    }
    fin.close();

    cout << "Calling Test Now" << endl;
    sleep(1);

    // Now Query Data.
    time_t start = time(0);
    ftime(&startTime);

    char* szError = 0;
    const char* szTail = 0;
    sqlite3_stmt* pVM;

    sprintf(buf, "select * from ttest where name = ?");
    ret = sqlite3_prepare_v2(db, buf, -1, &pVM, &szTail);
    if (ret != SQLITE_OK) exit(ret);

    for (int k = 0; k < 10; k++)
    {
        ret = sqlite3_bind_text(pVM, 1, data[k].c_str(),
                                data[k].length(), SQLITE_STATIC);
        ret = sqlite3_step(pVM);
        ret = sqlite3_reset(pVM);
    }
    // ... (timing printout truncated in the archive)

AW: [sqlite] Re: In Mem Query Performance

2007-07-03 Thread Michael Ruck
That's not what I meant. I meant the following:

Take the key in the format 'kkk[...]k490' and split it into two (66
characters each) or three (44 characters each) columns of equal length,
e.g. key0_0 = '[...]', key0_1 = '[...]', key0_2 = '[...]k490'. If you can
then reduce the select to compare only key0_2, you have reduced the work
required to find a row by 2/3. If you could even reverse the keys, you'd
benefit even more, as the difference would occur in the first few bytes.
(All this is just guessing based on your examples - without knowing the
actual structure of your data.)

So you would have 3 columns instead of one. The ones which change most
frequently will go into the primary key, as it benefits most by not having
duplicates.

Once you've done that you should be able to reduce your query times
fundamentally by applying a comparison only to a certain column.

If that doesn't help and you still have a lot of equality in your keys, I'd
throw them out of the actual data and put them into some kind of dictionary
table and use joins to map them together. 

Given your current examples of your dataset, there isn't much that can be
done without optimizing the data for storage and lookup - this isn't
something where a set of compiler flags will help a lot. The optimizers are
smart, but they can't correct bad design in the first place. You have to
reduce the actual code being executed by the CPU (e.g. bytes to compare) to
get faster.

Mike
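A minimal sketch of this split, using Python's stdlib sqlite3 module in place of the C API. The column names key0_0..key0_2, the 44-character segments, and the doid values are all invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical split of one 132-character key into three 44-character
# columns; the most frequently changing segment (key0_2) is compared first.
con.execute("""CREATE TABLE test (
    key0_0 TEXT NOT NULL, key0_1 TEXT NOT NULL, key0_2 TEXT NOT NULL,
    doid TEXT,
    PRIMARY KEY (key0_2, key0_1, key0_0))""")

seg_a = "k" * 44          # rarely-changing segments, shared by all rows
seg_b = "k" * 44
for i in range(100):
    con.execute("INSERT INTO test VALUES (?, ?, ?, ?)",
                (seg_a, seg_b, "k" * 40 + "%04d" % i, "doid%d" % i))

# Concatenate the columns again to get the original key format back.
row = con.execute(
    "SELECT key0_0 || key0_1 || key0_2 AS key, doid FROM test "
    "WHERE key0_2 = ? AND key0_1 = ? AND key0_0 = ?",
    ("k" * 40 + "0042", seg_b, seg_a)).fetchone()
```

Putting the most-varying column first in the primary key means the index comparison hits differing bytes immediately instead of after 128 matching characters.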


-Ursprüngliche Nachricht-
Von: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Gesendet: Dienstag, 3. Juli 2007 18:21
An: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Betreff: Re: [sqlite] Re: In Mem Query Performance

Hi,

As said, I used a separate ID (hash) but it made the performance very bad.
Now it takes minutes. [Version: 3.4.0]

regards
ragha

unsigned long idGen(const string & id) 
{
const char* __s = id.c_str();
unsigned long __h = 0; 
for ( ; *__s; ++__s)
{
   __h = 5*__h + *__s;
}
return size_t(__h); 
}
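For reference, the same hash in Python (a direct port, masking to 64 bits to imitate C's unsigned long wrap-around; it only restates the function above and does not address the slowdown):

```python
def id_gen(s: str) -> int:
    """Port of the C idGen above: h = 5*h + byte, wrapping at 64 bits."""
    h = 0
    for c in s.encode():
        h = (5 * h + c) & 0xFFFFFFFFFFFFFFFF
    return h
```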

gen: program

int main()
{
    fstream fout("data.sql", ios::out);
    fstream fout1("query.sql", ios::out);
    fout << "Begin;" << endl;
    // ... (remainder of the generator truncated in the archive)
}

AW: [sqlite] Re: In Mem Query Performance

You can also add a custom function like StringReverse(stringToReverse) and
reverse the keys in your insert and select statements.

Sylko Zschiedrich
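With a wrapper that supports user-defined SQL functions this is a few lines; a sketch with Python's stdlib sqlite3 module (the function name StringReverse follows the suggestion above, and the key value is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Register a one-argument scalar SQL function that reverses its input.
con.create_function("StringReverse", 1,
                    lambda s: s[::-1] if s is not None else None)

con.execute("CREATE TABLE t (key TEXT PRIMARY KEY)")
# Store the key reversed, so the varying suffix is compared first...
con.execute("INSERT INTO t VALUES (StringReverse(?))", ("k" * 40 + "0490",))
# ...and reverse it again on the way out.
row = con.execute("SELECT StringReverse(key) FROM t").fetchone()
```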


-Ursprüngliche Nachricht-
Von: Ken [mailto:[EMAIL PROTECTED] 
Gesendet: Dienstag, 3. Juli 2007 16:24
An: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Betreff: Re: [sqlite] Re: In Mem Query Performance

Just a thought regarding this key issue.

An enhancement idea for sqlite: enable a "reverse key" indexing method. Would
this be a simple change?
 
 Ken
 
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote: 
Hi

>Hi Ken,
>
>Thanks a lot.
>But this would require the key to be short. Will Check if this is 
acceptable
>to all as we may not be able to port old db data if the key format is 
changed.
>

Perhaps the key can be modified only for comparison. You store the
key as you want, but before comparing, apply an RLE compression. You can
store the RLE-compressed key in the database too. Note that RLE is a
one-to-one transform, that is, one key has only one compressed key and one
compressed key has only one key. Working that way you can compare
200,1,1,2 with 199,1,2,2 (four bytes) instead of ..(196 '1')..2
with ..(195 '1')..22.


HTH

>regards
>ragha
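The transform described above, sketched in Python (the pair encoding is one possible serialization; as noted, it only pays off when keys really consist of long runs):

```python
def rle(s: str):
    """Run-length encode a string into (count, char) pairs."""
    pairs = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        pairs.append((j - i, s[i]))
        i = j
    return pairs

# Keys of ~200 characters that differ near the end collapse to two pairs,
# so a comparison touches a handful of values instead of hundreds of bytes.
a = rle("1" * 196 + "2")
b = rle("1" * 195 + "22")
```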





Re: AW: [sqlite] Re: In Mem Query Performance

Ditto.

My test case proved conclusively that the concatenated KEY string in the first
example is very poor. In general, columns should not contain concatenated data.

The test results show timings with a 4k page_size and default cache size. The
end result was a 1.2 to 1.1 second access timing:
  Hardware:
AMD x2 3800
2 gig ram
7200 rpm ide
(linux 2.6  kernel)
 
 

Michael Ruck <[EMAIL PROTECTED]> wrote:

As has been suggested numerous times, you should split the key. The keys
you've shown are very long and differ only in the last characters. You
should try splitting the key (maybe into two or three columns) and ordering
the columns by change frequency. This way sqlite doesn't have to run as many
equality comparisons and detects differences earlier. Your times are only as
bad as they are because most of the time is spent testing for equality and
hitting the differences only after comparing 128 characters, which match
most of the time (taken from your earlier mails.)

I would suggest splitting the key into at least two columns, ordered by
change frequency. You won't have any data loss; you just need to concatenate
the columns again in your SQL queries to obtain results in the same format
as previously.

Mike

-Ursprüngliche Nachricht-
Von: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Gesendet: Dienstag, 3. Juli 2007 12:12
An: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Betreff: Re: [sqlite] Re: In Mem Query Performance

Hi Ken,

Thanks a lot.
But this would require the key to be short. Will Check if this is acceptable
to all as we may not be able to port old db data if the key format is
changed.

regards
ragha



- Original Message -
From: Ken 
Date: Tuesday, July 3, 2007 0:46 am
Subject: Re: [sqlite] Re: In Mem Query Performance

> [quoted timings and test program snipped]

AW: [sqlite] Re: In Mem Query Performance

As has been suggested numerous times, you should split the key. The keys
you've shown are very long and differ only in the last characters. You
should try splitting the key (maybe into two or three columns) and ordering
the columns by change frequency. This way sqlite doesn't have to run as many
equality comparisons and detects differences earlier. Your times are only as
bad as they are because most of the time is spent testing for equality and
hitting the differences only after comparing 128 characters, which match
most of the time (taken from your earlier mails.)

I would suggest splitting the key into at least two columns, ordered by
change frequency. You won't have any data loss; you just need to concatenate
the columns again in your SQL queries to obtain results in the same format
as previously.

Mike

-Ursprüngliche Nachricht-
Von: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Gesendet: Dienstag, 3. Juli 2007 12:12
An: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Betreff: Re: [sqlite] Re: In Mem Query Performance

Hi Ken,

Thanks a lot.
But this would require the key to be short. Will Check if this is acceptable
to all as we may not be able to port old db data if the key format is
changed.

regards
ragha



- Original Message -
From: Ken <[EMAIL PROTECTED]>
Date: Tuesday, July 3, 2007 0:46 am
Subject: Re: [sqlite] Re: In Mem Query Performance

> [quoted timings and test program snipped]

AW: [sqlite] In Mem Query Performance

Don't use sqlite_get_table. Use sqlite3_prepare(_v2), sqlite3_step and
sqlite3_finalize/sqlite3_reset.

Mike 
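The equivalent flow in a high-level wrapper is a single parameterized statement bound with a new value per lookup; a Python sketch with the stdlib sqlite3 module (the table and key format are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (name TEXT PRIMARY KEY, doid TEXT)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [("key%05d" % i, "doid%d" % i) for i in range(1000)])

# One SQL string, compiled once and re-bound per lookup -- the analogue of
# sqlite3_prepare_v2 / sqlite3_bind_text / sqlite3_step / sqlite3_reset.
cur = con.cursor()
results = [cur.execute("SELECT doid FROM test WHERE name = ?",
                       ("key%05d" % k,)).fetchone()[0]
           for k in (7, 42, 999)]
```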

-Ursprüngliche Nachricht-
Von: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Gesendet: Montag, 25. Juni 2007 13:48
An: sqlite-users@sqlite.org
Betreff: [sqlite] In Mem Query Performance


Hi,

We are using Sqlite in "in memory mode" and we have around 200 tables.
Each table has 10 columns of type text.
Each table has around 1 records; each column holds around 128 bytes of data.
Select performance is around 2000 records/sec. Please suggest if there is a
way to improve further.

Table structure,Query style is as below,

create table test1 ...200
(
  key0 text,
  key1 text,
  key2 text,
  key3 text,
  key4 text,
  nonKey0 text,
  nonKey1 text,
  nonKey2 text,
  nonKey3 text,
  nonKey4 text,
  primary key(key0,key1,key2,key3,key4)
);

  Query Used..

 SELECT * FROM TABLE136 WHERE
key0='kk[...]kk490' AND
key1='kk[...]kk491' AND
key2='kk[...]kk492' AND
key3='kk[...]kk493' AND
key4='kk[...]kk494'

API used sqlite_get_table...

regards
ragha







AW: [sqlite] Trigger on Attached Database

Just forgot to mention:
Referencing previously opened/attached databases should also become possible
with foreign key constraints, once they are enforced...

Marc

On Thu, 2007-06-21 at 11:57 +0200, Andre du Plessis wrote:
>> Is it possible to do this:
>> 
>>Open DB1
>> 
>>Attatch DB2
>> 
>> In DB1 have a trigger that does
>> 
>>   Insert into DB2. ?
>> 
>> Theoretically it seems possible but we couldn't get it to work. Before I
>> investigate further just want to know if it is possible

>No. A trigger may only reference objects in its own database.
>You should be getting an error message along the lines of
>"cannot reference object in database XXX" when you create
>the trigger.
>
>Dan.
>

It would be very helpful to have the possibility of creating a trigger in DB2
referencing DB1.
"Open DB1"
"Attach DB2" (DB2 has trigger: "Insert into DB1")

Of course, if you just
"Open DB2" (DB2 has trigger: "Insert into DB1"),
this would give an error.

Marc
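The restriction Dan describes is easy to reproduce; a Python sketch with the stdlib sqlite3 module (table and trigger names invented). SQLite rejects qualified table names inside a trigger body:

```python
import sqlite3

con = sqlite3.connect(":memory:")            # "main" plays the role of DB1
con.execute("ATTACH DATABASE ':memory:' AS db2")
con.execute("CREATE TABLE t1 (x)")
con.execute("CREATE TABLE db2.t2 (x)")

try:
    con.execute("""CREATE TRIGGER t1_copy AFTER INSERT ON t1
                   BEGIN INSERT INTO db2.t2 VALUES (NEW.x); END""")
    trigger_created = True
except sqlite3.OperationalError:
    trigger_created = False                  # cross-database reference refused
```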






AW: [sqlite] ATTACH and sqlite3_open()

Just call sqlite3_exec with the proper ATTACH as you would on the command
line. (Of course you could also do a prepare/step/finalize, but for ATTACH
sqlite3_exec is enough.)

Example:

sqlite3 *db = NULL;
/* ... */
sqlite3_exec(db, "ATTACH DATABASE 'filename' AS dbname", NULL, NULL,
);

Using SELECT sqlite_attach('filename','dbname',NULL) should also work -
haven't tried it though. This should have the benefit that the arguments can
be bound and the attach statement can be prepared. I'm not certain this is
possible with the ATTACH syntax. I'll try it soon though.

Mike
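A quick check of the binding question with Python's stdlib sqlite3 module: in current SQLite versions the filename in ATTACH is an ordinary expression, so it can be bound as a parameter (the schema name cannot):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Bind the filename; the schema name must be a literal identifier.
con.execute("ATTACH DATABASE ? AS dbname", (":memory:",))
schemas = [row[1] for row in con.execute("PRAGMA database_list")]
```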

-Ursprüngliche Nachricht-
Von: Jon Scully [mailto:[EMAIL PROTECTED] 
Gesendet: Donnerstag, 10. Mai 2007 01:18
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] ATTACH and sqlite3_open()

Simpler than that.  I merely want to attach to two databases (files).
flash.db contains a set of tables that hold non-volatile data; ram.db
contains a set of tables that is re-built (volatile) on re-boot -- but
offers fast, read-only access.  No table-name overlaps, of course.

I want to access both sets of tables, seamlessly, as I should be able to do
using ATTACH, at the command prompt, but do so using the C API.

Just wondering how others do this (Using ATTACH?  Using sqlite3_open()?
Obviously I haven't looked very far into the
sqlite3_open() code to see how it's put together, etc.).

Thanks for the prompt reply.

On 5/9/07, Andrew Finkenstadt <[EMAIL PROTECTED]> wrote:
> On further inspection of your code fragment, it appears you aren't 
> really using (extra) attached  databases, but merely specifying an 
> alternative file to use if the first file is not available.  Calling 
> sqlite3_close(...) will do the right thing, by closing the actual 
> database that succeeded in opening.
>
> --andy





AW: [sqlite] Transaction journal corrupted by antivirus

> The problem is my application is used by thousand of customers.
> I cannot ask them to tweak their antivirus.

Why don't you encrypt the message content before storing?

Christian




RE: AW: [sqlite] sqlite and borland c++ builder

The attachment someone sent me seemed to do the trick in terms of fixing the
compilation errors. Some of these other errors I am getting are kind of
unexplainable, but I think they are unrelated to sqlite. Maybe things are
conflicting, I'm not sure, but in terms of any sqlite compilation issues the
files I was sent seemed to clear that up. Now I think it is strictly C++
Builder giving me aggravation for whatever reason.

Thanks
- Jon 

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 29, 2007 12:50 PM
To: sqlite-users@sqlite.org
Subject: Re: AW: [sqlite] sqlite and borland c++ builder

If that is his only problem, all he has to do is add some basic definitions
for his compiler specifying the Sqlite3 API components he is using.

Michael Ruck wrote:
> If I understand him correctly, he's having issues including the original
> sqlite3.h in his own sources too... He tried to build sqlite again to
solve
> that problem, but it remains there too.
> 
> I would recommend patching up sqlite3.h to conform to BC++ requirements -
> changing those structs to something the compiler understands.
> 
> Mike 
> 
> -Ursprüngliche Nachricht-
> Von: John Stanton [mailto:[EMAIL PROTECTED] 
> Gesendet: Sonntag, 29. April 2007 18:31
> An: sqlite-users@sqlite.org
> Betreff: Re: [sqlite] sqlite and borland c++ builder
> 
> Why not use gcc to compile your library, or use a precompiled DLL?
> 
> Jonathan Kahn wrote:
> 
>>Hi Ulrik,
>>  Thank you for responding.  I'll try anything!  The frustration that all
>>this has brought me I am determined to solve it.  
>>
>>  If I built SQLite with a C compiler what would be the result?  What
> 
> would
> 
>>I be able to work with from within c++?  Won't compiling leave me with an
>>executable?  I am fairly new to dealing with different compilers and
> 
> things,
> 
>>so please forgive my ignorance. 
>>
>>Thanks a lot,
>>- Jon
>>
>>
>>-Original Message-
>>From: Ulrik Petersen [mailto:[EMAIL PROTECTED] 
>>Sent: Sunday, April 29, 2007 2:29 AM
>>To: sqlite-users@sqlite.org
>>Subject: Re: [sqlite] sqlite and borland c++ builder
>>
>>Hi Jon,
>>
>>is it not an option to build SQLite with a C compiler, then call it from 
>>within C++?
>>
>>
>>Regards,
>>
>>Ulrik Petersen
>>
>>
>>Jonathan Kahn wrote:
>>
>>
>>>Even when I try to build a new dll I get errors with attach.c and it says
>>>cannot convert 'void *' to 'Db *', no matter what route I take I always
>>
>>hit
>>
>>
>>>a bump.  I'm just trying anything at this point
>>>
>>>- Jon
>>>
>>>-Original Message-
>>>From: Joe Wilson [mailto:[EMAIL PROTECTED] 
>>>Sent: Sunday, April 29, 2007 1:59 AM
>>>To: sqlite-users@sqlite.org
>>>Subject: Re: [sqlite] sqlite and borland c++ builder 
>>>
>>>I wrote too quickly - sqlite3.h correctly uses __cplusplus for extern
"C".
>>>
>>>But the amalgamated sqlite3.c cannot be compiled from a C++ compiler
>>>for the reasons described below.
>>>
>>>--- Joe Wilson <[EMAIL PROTECTED]> wrote:
>>> 
>>>
>>>
>>>>> When I try to use the header I get errors
>>>>>
>>>>>[C++ Error] sqlite3.h(1778): E2232 Constant member
>>>>>'sqlite3_index_info::nConstraint' in class without constructors
>>>>> 
>>>>
>>>>It appears it is trying to compile the sqlite header file as if it were
>>>>   
>>>
>>>C++.
>>> 
>>>
>>>
>>>>Lemme guess - you're using the amalgamated sqlite3.c from 3.3.17, right?
right?
>>>>
>>>
>>>>from the generated sqlite3.c:
>>>
>>>>/*
>>>>** Make sure we can call this stuff from C++.
>>>>*/
>>>>#if 0
>>>>extern "C" {
>>>>#endif
>>>>
>>>>See the #if 0? That's the problem. It should be:
>>>>
>>>>#if __cplusplus
>>>>
>>>>SQLite 3.3.17 has a bug in sqlite3.c generation.
>>>>To work around this issue, do this:
>>>>
>>>>extern "C" {
>>>>#include "sqlite3.h"
>>>>}
>>>>
>>>>
>>>>__
>>>&g

Re: AW: [sqlite] sqlite and borland c++ builder

If that is his only problem, all he has to do is add some basic definitions
for his compiler specifying the Sqlite3 API components he is using.


Michael Ruck wrote:

If I understand him correctly, he's having issues including the original
sqlite3.h in his own sources too... He tried to build sqlite again to solve
that problem, but it remains there too.

I would recommend patching up sqlite3.h to conform to BC++ requirements -
changing those structs to something the compiler understands.

Mike 


-Ursprüngliche Nachricht-
Von: John Stanton [mailto:[EMAIL PROTECTED] 
Gesendet: Sonntag, 29. April 2007 18:31

An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite and borland c++ builder

Why not use gcc to compile your library, or use a precompiled DLL?

Jonathan Kahn wrote:


Hi Ulrik,
 Thank you for responding.  I'll try anything!  The frustration that all
this has brought me I am determined to solve it.  


 If I built SQLite with a C compiler what would be the result?  What


would


I be able to work with from within c++?  Won't compiling leave me with an
executable?  I am fairly new to dealing with different compilers and


things,

so please forgive my ignorance. 


Thanks a lot,
- Jon


-Original Message-
From: Ulrik Petersen [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 29, 2007 2:29 AM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite and borland c++ builder

Hi Jon,

is it not an option to build SQLite with a C compiler, then call it from 
within C++?



Regards,

Ulrik Petersen


Jonathan Kahn wrote:



Even when I try to build a new dll I get errors with attach.c and it says
cannot convert 'void *' to 'Db *', no matter what route I take I always


hit



a bump.  I'm just trying anything at this point

- Jon

-Original Message-
From: Joe Wilson [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 29, 2007 1:59 AM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite and borland c++ builder 


I wrote too quickly - sqlite3.h correctly uses __cplusplus for extern "C".

But the amalgamated sqlite3.c cannot be compiled from a C++ compiler
for the reasons described below.

--- Joe Wilson <[EMAIL PROTECTED]> wrote:




When I try to use the header I get errors

[C++ Error] sqlite3.h(1778): E2232 Constant member
'sqlite3_index_info::nConstraint' in class without constructors



It appears it is trying to compile the sqlite header file as if it were
  


C++.




Lemme guess - you're using the amalgamated sqlite3.c from 3.3.17, right?




from the generated sqlite3.c:



/*
** Make sure we can call this stuff from C++.
*/
#if 0
extern "C" {
#endif

See the #if 0? That's the problem. It should be:

#if __cplusplus

SQLite 3.3.17 has a bug in sqlite3.c generation.
To work around this issue, do this:

extern "C" {
#include "sqlite3.h"
}


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



  









AW: [sqlite] sqlite and borland c++ builder

If I understand him correctly, he's having issues including the original
sqlite3.h in his own sources too... He tried to build sqlite again to solve
that problem, but it remains there too.

I would recommend patching up sqlite3.h to conform to BC++ requirements -
changing those structs to something the compiler understands.

Mike 

-Ursprüngliche Nachricht-
Von: John Stanton [mailto:[EMAIL PROTECTED] 
Gesendet: Sonntag, 29. April 2007 18:31
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite and borland c++ builder

Why not use gcc to compile your library, or use a precompiled DLL?

Jonathan Kahn wrote:
> Hi Ulrik,
>   Thank you for responding.  I'll try anything!  The frustration that all
> this has brought me I am determined to solve it.  
> 
>   If I built SQLite with a C compiler what would be the result?  What
would
> I be able to work with from within c++?  Won't compiling leave me with an
> executable?  I am fairly new to dealing with different compilers and
things,
> so please forgive my ignorance. 
> 
> Thanks a lot,
> - Jon
> 
> 
> -Original Message-
> From: Ulrik Petersen [mailto:[EMAIL PROTECTED] 
> Sent: Sunday, April 29, 2007 2:29 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite and borland c++ builder
> 
> Hi Jon,
> 
> is it not an option to build SQLite with a C compiler, then call it from 
> within C++?
> 
> 
> Regards,
> 
> Ulrik Petersen
> 
> 
> Jonathan Kahn wrote:
> 
>>Even when I try to build a new dll I get errors with attach.c and it says
>>cannot convert 'void *' to 'Db *', no matter what route I take I always
> 
> hit
> 
>>a bump.  I'm just trying anything at this point
>>
>>- Jon
>>
>>-Original Message-
>>From: Joe Wilson [mailto:[EMAIL PROTECTED] 
>>Sent: Sunday, April 29, 2007 1:59 AM
>>To: sqlite-users@sqlite.org
>>Subject: Re: [sqlite] sqlite and borland c++ builder 
>>
>>I wrote too quickly - sqlite3.h correctly uses __cplusplus for extern "C".
>>
>>But the amalgamated sqlite3.c cannot be compiled from a C++ compiler
>>for the reasons described below.
>>
>>--- Joe Wilson <[EMAIL PROTECTED]> wrote:
>>  
>>
  When I try to use the header I get errors

 [C++ Error] sqlite3.h(1778): E2232 Constant member
'sqlite3_index_info::nConstraint' in class without constructors
  
>>>
>>>It appears it is trying to compile the sqlite header file as if it were
>>>
>>
>>C++.
>>  
>>
>>>Lemme guess - you're using the amalgamated sqlite3.c from 3.3.17, right?
>>>
>>>from the generated sqlite3.c:
>>>
>>> /*
>>> ** Make sure we can call this stuff from C++.
>>> */
>>> #if 0
>>> extern "C" {
>>> #endif
>>>
>>>See the #if 0? That's the problem. It should be:
>>>
>>> #if __cplusplus
>>>
>>>SQLite 3.3.17 has a bug in sqlite3.c generation.
>>>To work around this issue, do this:
>>>
>>> extern "C" {
>>> #include "sqlite3.h"
>>> }
>>>
>>>
>>>__
>>>Do You Yahoo!?
>>>Tired of spam?  Yahoo! Mail has the best spam protection around 
>>>http://mail.yahoo.com 



AW: [sqlite] INSTEAD OF Trigger Question

Thank you! It works fine.

Sylko

-Ursprüngliche Nachricht-
Von: Stephen Oberholtzer [mailto:[EMAIL PROTECTED] 
Gesendet: Montag, 23. April 2007 15:42
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] INSTEAD OF Trigger Question

On 4/23/07, Sylko Zschiedrich <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I have a question about "instead of triggers" on views.
>
> Following schema:
>

> [...]
> That's my current implementation. But with this I can't update the View
> to 'null' values because the coalesce statement will return the
> old.values.
>
> How can I handle that?
> Can I "detect" the columns to update?


Actually, you don't need to. The OLD and NEW pseudotables don't
contain just the data affected by the update; they include the
*complete* row before the update (in OLD) and after the update (in
NEW).

I have attached a sqlite3 script demonstrating this; just run it with

sqlite3 < sqlite-view-update.sql

to see the results.
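For readers without the attached script, here is a minimal sketch of the same idea (hypothetical table and view names, using Python's built-in sqlite3 module): because NEW carries the complete post-update row, the trigger can copy NEW values straight into the base table, and setting a column to NULL through the view just works.

```python
import sqlite3

# Hypothetical schema: a base table, a view over it, and an INSTEAD OF
# UPDATE trigger that copies the complete NEW row into the base table.
# No coalesce(NEW.col, OLD.col) is needed, so NULLs pass through.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, note TEXT);
CREATE VIEW person_v AS SELECT id, name, note FROM person;
CREATE TRIGGER person_v_update INSTEAD OF UPDATE ON person_v
BEGIN
    UPDATE person SET name = NEW.name, note = NEW.note WHERE id = NEW.id;
END;
INSERT INTO person VALUES (1, 'Robert', 'old note');
""")

# Updating a column to NULL through the view now reaches the base table.
con.execute("UPDATE person_v SET note = NULL WHERE id = 1")
print(con.execute("SELECT name, note FROM person").fetchone())
# ('Robert', None)
```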

A final warning: there is a gross inefficiency in the way updates on
views are handled. If your view is big -- i.e. "select count(*) from
myview" reports more than about 100K rows -- your update is going to
take inexplicably long.  I'm pondering a patch for that.

-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Re: DB design questions

Thanks for your response. Do you have a recommendation for a simpler data
store that supports only simple queries (like, equals, not-equals on
attributes) and transactions?

Thanks,
Mike

-Original Message-
From: A. Pagaltzis [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 21 April 2007 21:17
To: sqlite-users@sqlite.org
Subject: [sqlite] Re: DB design questions

* Michael Ruck <[EMAIL PROTECTED]> [2007-04-20 16:15]:
> Is there anyone who has experience with this kind of design, do you 
> have better ideas on modelling this kind of data?

This is actually a very typical approach to storing arbitrarily structured
data entities in an SQL database, one that everyone discovers independently,
much like the adjacency model is the first thing anyone comes up with for
storing trees in an SQL database.

The problem with this sort of schema (just as with the adjacency
model) is that it makes it very hard to formulate any kind of interesting
query over the data. You’d need a vendor-specific facility for recursive
queries to ask anything non-trivial of the database; such queries are
expensive even where supported, and SQLite does not support them at all.
Essentially, you are reducing the SQL engine to a dumb backend store
incapable of complex query logic; complex queries have to be performed in
application code after retrieving the entire set of possibly-relevant data.
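As a rough illustration of that cost (hypothetical table and attribute names), even a two-attribute filter over such an attribute/value layout already needs one self-join per attribute tested:

```python
import sqlite3

# Hypothetical entity-attribute-value table: one row per attribute of
# each entity, instead of one column per attribute.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE eav (entity INTEGER, attr TEXT, value TEXT);
INSERT INTO eav VALUES (1, 'color', 'red'),  (1, 'size', 'small'),
                       (2, 'color', 'red'),  (2, 'size', 'large');
""")

# "color = red AND size = small" needs a self-join; every additional
# attribute in the filter adds yet another join of eav against itself.
rows = con.execute("""
    SELECT a.entity
      FROM eav AS a
      JOIN eav AS b ON b.entity = a.entity
     WHERE a.attr = 'color' AND a.value = 'red'
       AND b.attr = 'size'  AND b.value = 'small'
""").fetchall()
print(rows)  # [(1,)]
```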

You’re better off using some other kind of data store than an SQL database
if you really need storage for that kind of model.

Regards,
--
Aristotle Pagaltzis // 


-
To unsubscribe, send email to [EMAIL PROTECTED]

-



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Still getting "Insertion failed because database isfull." errors

> the database residing on removable media.  When the system returns,
the
> "pointer" to the media is not guaranteed to work again.  In other
words,

The file handle remains perfectly valid as long as the media has not been
removed or changed. Besides, I've observed that sometimes the media is
not accessible at all after an application tried to use one of the
open file handles while the media was being remounted. Maybe that's just
a sign of a poorly written driver and thus device-dependent...

Greetings, Christian

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Still getting "Insertion failed because database is full." errors

Hello Joel!

We were faced with similar problems in the field, too. Those were more
general ones with PCMCIA/CF/SD cards.

The reason was that the mobile devices (different device types with
Windows CE 4.1 and 5.0) don't handle access to removable media
gracefully when the device is going into suspend or resuming from
suspend.

We had to learn (the hard way) that it's an application's task to block
*any* file access (reading and writing) from before the device goes into
suspend until the device has resumed from suspend *and* the
removable media has been successfully remounted. The remount process can
take up to 5-10 seconds!

If your application doesn't handle those cases well, you'll run into
problems. Typically, you will see that kind of problem only in the
field and not when doing in-house tests...

Greetings, Christian

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Still getting "Insertion failed because database is full." errors

Unfortunately, DEBUG builds change timing entirely on Windows platforms. I
would suggest creating a release build with symbols.

Mike

-Original Message-
From: Joel Cochran [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 17 April 2007 20:59
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Still getting "Insertion failed because database is
full." errors

I've had him sit beside me in my office and attempt to recreate it, both
using his device and mine, but it never happens.  Actually, I did get it to
happen once on his machine, but I was not connected to my PC, so I connected
and tried to recreate it through DEBUG but could not.  The last time it
happened in the field, I had him stop using the device and bring it to me so
that I could see the Stack Trace (which I sent to the list).  With neither
his device nor mine can I recreate the problem in DEBUG.  It is very
frustrating.

Thanks,

Joel


On 4/17/07, Jonas Sandman <[EMAIL PROTECTED]> wrote:
>
> What is the guy on in the field doing that you are not? Are you using 
> his device for the testing?
> Since it takes minutes for him to encounter the error it can't be that 
> hard to recreate. Follow him around for an hour or so and see how he 
> uses the program. It could easily be something he's doing that you 
> aren't...
>
> On 4/17/07, Joel Cochran <[EMAIL PROTECTED]> wrote:
> >
> > The saga continues...
> >
> > I was very excited by the idea that there was something wrong with 
> > the
> CF
> > Card.  The theory fits all the facts: it explains why the original 
> > database threw unspecified errors, it explains why now SQLite is 
> > throwing errors, it explains why I can't reproduce the problem in 
> > house or on my
> machine.  It
> > just seemed to explain everything, so yesterday I went out and 
> > bought a brand-spankin' new SanDisk CF card.  I loaded it up with 
> > the database, installed it on my tester's machine, and this morning 
> > it went back out
> to
> > the field for testing.
> >
> > Within minutes, he encountered the same error.
> >
> > Now I just don't believe the problem is with the card, so I feel 
> > that I
> am
> > right back at square one.  I'm really at my wits end and don't know 
> > what to do next.  I am going to go ahead and install the database on 
> > the device memory instead of removable media, just to test it out, 
> > but I have no faith that it will change anything.  When that fails, 
> > I will send the tester
> out
> > with another device entirely, but again I expect the same results.
> >
> > I'm convinced now that the problem is with the application 
> > architecture, but I have no idea what to look at anymore.  I've 
> > stared and fiddled with
> this
> > code so much that I'm ready to throw in the towel.  But since I'd 
> > like
> to
> > keep my job that isn't an option.  If I had hair, I'd pull it out.
> >
> > Any help at all would be appreciated.
> >
> > --
> > Joel Cochran
> >
> >
> >
> > On 4/13/07, Michael Ruck <[EMAIL PROTECTED]> wrote:
> > >
> > > Unless things have changed recently, the following should still be
> valid
> > > for
> > >
> > > Windows Mobile/Windows CE devices: Usually these devices do not 
> > > power
> > off,
> > > but
> > > stay in a standby state where the memory is always powered. Check 
> > > if that's the case with your system and move as much as possible 
> > > into RAM or a RAM
> > disk,
> > > if that
> > > feature is provided by the windows mobile kernel built for your
> device.
> > >
> > > If that's not possible, I'd suggest replacing CF cards with micro
> drives
> > -
> > > these
> > > are regular hard drives in a CF card format. I'm not up to date on
> > storage
> > > space,
> > > but should be sufficient for your needs.
> > >
> > > To test the cards I'd put them in a card reader format it and fill 
> > > it completely up with zeros. When a flash card erases a byte, it 
> > > sets all bits to
> ones
> > > and
> > > upon
> > > write clears those, which need to be zero. So to test all bits you
> > really
> > > need to
> > > zero out the entire card. This will also give the controller in 
> > > the
> card
> > a
> > > chance
> > > to remap bad sectors with spares. Finally you determine the file 
> > > size
> of
> > > the
> > > card,
> > > when you receive the first write error. This is (approximately) 
> > > the
> > number
> > > of bytes
> > > the card can store (at that point in time) and falling.
> > >
> > > It seems some cards even return "read errors", when they hit a
> defective
> > > sector
> > > upon read. Maybe the actual error code just gets lost/mangled on 
> > > the
> way
> > > up
> > > and the
> > > actual error is just a simple read error ;) I've seen reports 
> > > about
> this
> > > with some
> > > digital cameras, which would not even let people view the pictures
> taken
> > a
> > > minute
> > > ago.
> > >
> > > Mike
> > >
> > > -Original Message-
> > > From: John Stanton [mailto:[EMAIL PROTECTED]
> > > Sent: Friday, 13 April 2007 

AW: [sqlite] Still getting "Insertion failed because database is full." errors

Unless things have changed recently, the following should still be valid
for Windows Mobile/Windows CE devices: usually these devices do not power
off, but stay in a standby state where the memory is always powered. Check
if that's the case with your system and move as much as possible into RAM
or a RAM disk, if that feature is provided by the Windows Mobile kernel
built for your device.

If that's not possible, I'd suggest replacing CF cards with micro drives;
these are regular hard drives in a CF card format. I'm not up to date on
storage space, but it should be sufficient for your needs.

To test the cards, I'd put them in a card reader, format them, and fill
them completely with zeros. When a flash card erases a byte, it sets all
bits to ones; a write then clears those bits that need to be zero. So to
test all bits you really need to zero out the entire card. This also
gives the controller in the card a chance to remap bad sectors with
spares. Finally, you determine the file size at which you receive the
first write error. This is (approximately) the number of bytes the card
can store (at that point in time), and falling.
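A rough sketch of that procedure in Python (the `limit` parameter is a hypothetical addition for dry runs on a normal disk; point `path` at a scratch file on the card under test, never at data you need):

```python
import os

def fill_with_zeros(path, chunk=1 << 20, limit=None):
    """Write zero-filled chunks to `path` until the first write error
    (or until `limit` bytes, for a dry run); return bytes written.
    On a flash card this approximates the usable capacity and gives the
    controller a chance to remap bad sectors with spares."""
    written = 0
    block = b"\x00" * chunk
    try:
        with open(path, "wb") as f:
            while limit is None or written < limit:
                f.write(block)
                f.flush()
                os.fsync(f.fileno())  # push each chunk out to the media
                written += chunk
    except OSError:
        pass  # first write failure: (approximate) capacity reached
    return written
```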

It seems some cards even return "read errors" when they hit a defective
sector on read. Maybe the actual error code just gets lost/mangled on the
way up and the actual error is just a simple read error ;) I've seen
reports about this with some digital cameras, which would not even let
people view the pictures taken a minute ago.

Mike

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, 13 April 2007 23:44
To: [EMAIL PROTECTED]
Subject: Re: [sqlite] Still getting "Insertion failed because database is
full." errors

You might find some joy in the baby disk drives such as installed in the
original ipods.

Can you substitute RAM with a battery backup if the memory card is always in
the device?

Joel Cochran wrote:
> Thanks John and Dennis,
> At least now I have something to look at.  I will look into the CF 
> problem next.
> 
> The database itself gets generated on a PC and then transferred to the 
> CF Card.  During testing and development, this could have been 20-30 
> times a day, constantly erasing and recreating the existing DB.  We 
> have also sent large numbers of JPGs along with the database in the 
> past (there are none now, but have been before).  So these cards have 
> been written over a lot, perhaps that is the problem.
> 
> I think to test this, I will send the device back to the field with a 
> brand new card and see if the problem persists.  If the user can go 
> several days of normal use without the problem, then I'll be convinced 
> that it is the card.  Out of curiosity I just checked the CF cards 
> we've been using: on the development machine (which has NEVER shown 
> the error) I have a SanDisk CF Card.  On the Testing machine that is 
> having the problem, there is a PNY Technologies CF Card.  I wouldn't 
> be surprised if the SanDisk card isn't simply better than the PNY 
> card, so there is something else to consider.
> 
> Once actual field use begins, the database will be replaced every week 
> or so, along with a fair number of images (like 100-300 a week).  The 
> purpose of the application would have every record in the database 
> being updated and some new ones created.  And it would be that way 
> week in and week out, essentially forever.  We may eventually port it 
> over to very small Tablet PCs, but right now it is all Windows Mobile 
> 5.  This is one of the reasons I went with SQLite, so that down the 
> road I wouldn't have to reinvent the database piece of the software 
> for a different platform.
> 
> Given all this, I will definitely look into the link Dennis sent.  The 
> company is not going to be happy replacing CF cards all the time, so 
> if that can extend the wear then it will be welcome.
> 
> Thanks a lot,
> 
> Joel
> 
> On 4/13/07, Dennis Cote <[EMAIL PROTECTED]> wrote:
> 
>>
>> Joel Cochran wrote:
>> >
>> > Or do you mean over the course of the lifetime of a CF card it can 
>> > only be used so much?  That might apply to this scenario, these 
>> > cards have been written over continuously for the last 6 months.
>> >
>> Joel,
>>
>> Yes, that is exactly the problem. You should look at using a flash 
>> file system such as http://sourceware.org/jffs2/ that uses "wear
>> leveling"
>> algorithms to spread the writes over all the flash devices blocks if 
>> you are writing often.
>>
>> HTH
>> Dennis Cote
>>
>>
>> -
>> 
>>
>> To unsubscribe, send email to [EMAIL PROTECTED]
>>
>> -
>> 
>>
>>
>>
> 
> 



-
To unsubscribe, send email to [EMAIL PROTECTED]

-




Re: AW: [sqlite] Still getting "Insertion failed because database is full." errors

If something passes all tests but fails later then it is very likely 
failing where testing was not performed, such as the hardware under 
other conditions.


Michael Ruck wrote:

Hi,

Is this the only device seeing this error or are *all* devices seeing this
error? Have you checked the CF card? Maybe it's just the card, which is
corrupt, and you're hitting these bugs at points where the file system is
hitting a bad sector.

Is this running in a transaction?

Mike 


-Original Message-
From: Joel Cochran [mailto:[EMAIL PROTECTED] 
Sent: Friday, 13 April 2007 17:46
To: [EMAIL PROTECTED]
Subject: [sqlite] Still getting "Insertion failed because database is full."
errors

Hi folks,

I had sent this message out a couple of weeks ago, and I am still searching
for a solution.  I looked at the application design and made a modest
change: I now create a single SQLiteConnection and use it from start to
finish of the application.  Previously I was creating multiple connections
(and disconnecting, disposing them and then running GC), but all that has
been replaced with the single connection approach.

At first I thought this had solved the problem, because all in house testing
runs beautifully.  However, as soon as the device is sent to the field, the
error starts again.  Unfortunately, it means that I have never been able to
catch this in debug.  I did, however, change the error reporting a little
and got some more information.  The SQLiteException I am now getting
includes this information:

Insertion failed because the database is full database or disk is full

at System.Data.SQLite.SQLite3.Reset()
at System.Data.SQLite.SQLite3.Step()
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader.ctor()
at System.Data.SQLite.SQLiteCommand.ExecuteReader()
at System.Data.SQLite.SQLiteCommand.ExecuteDbDataReader()
at ... etc etc


I downloaded the C source and tried to read through it, but honestly I am
not a C programmer and didn't get very far.

Other than a possible bug, the only thing I can think of is that there is
something fundamentally wrong with my architecture.  What I can't get, is
why the message has anything to do with inserting.  While there are several
actions in my product that Insert into the database, the error never occurs
at those points.  If I understood what was being inserted, perhaps I could
figure out a soultion.

If anyone can help, I'd greatly appreciate it.  The original message is
included below this one.

TIA,

Joel


-- Original message --
Hi all,

I have searched the web, the newsgroups, and the archives, but all I can
find is one reference to someone having this trouble with Python and a bunch
of references that list this as predefined error #13 according to
http://www.sqlite.org/capi3.html.

What I can't find is any help in determining why a program might receive
this error.  The database is only 203KB and has 6 tables (maybe 1,000 rows
in all the tables combined) running off a 1GB CompactFlash card on a Windows
Mobile 5 device with 256MB of onboard RAM: 50MB of that is dedicated to
programs and the rest is storage.  The only app running on the device is the
one in question.  The error occurs at seemingly random times and forces the
user to end the application and start over.

At this point I don't know much else: the user is in the field and I will
have the device back late this afternoon for debugging.  I was hoping to
have a head start before I get the device, because as it is I have no idea
what the cause could be.
-- End Original message --


-
To unsubscribe, send email to [EMAIL PROTECTED]
-




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Still getting "Insertion failed because database is full." errors

Hi,

Is this the only device seeing this error or are *all* devices seeing this
error? Have you checked the CF card? Maybe it's just the card, which is
corrupt, and you're hitting these bugs at points where the file system is
hitting a bad sector.

Is this running in a transaction?

Mike 

-Original Message-
From: Joel Cochran [mailto:[EMAIL PROTECTED] 
Sent: Friday, 13 April 2007 17:46
To: [EMAIL PROTECTED]
Subject: [sqlite] Still getting "Insertion failed because database is full."
errors

Hi folks,

I had sent this message out a couple of weeks ago, and I am still searching
for a solution.  I looked at the application design and made a modest
change: I now create a single SQLiteConnection and use it from start to
finish of the application.  Previously I was creating multiple connections
(and disconnecting, disposing them and then running GC), but all that has
been replaced with the single connection approach.

At first I thought this had solved the problem, because all in house testing
runs beautifully.  However, as soon as the device is sent to the field, the
error starts again.  Unfortunately, it means that I have never been able to
catch this in debug.  I did, however, change the error reporting a little
and got some more information.  The SQLiteException I am now getting
includes this information:

Insertion failed because the database is full database or disk is full

at System.Data.SQLite.SQLite3.Reset()
at System.Data.SQLite.SQLite3.Step()
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader.ctor()
at System.Data.SQLite.SQLiteCommand.ExecuteReader()
at System.Data.SQLite.SQLiteCommand.ExecuteDbDataReader()
at ... etc etc


I downloaded the C source and tried to read through it, but honestly I am
not a C programmer and didn't get very far.

Other than a possible bug, the only thing I can think of is that there is
something fundamentally wrong with my architecture.  What I can't get, is
why the message has anything to do with inserting.  While there are several
actions in my product that Insert into the database, the error never occurs
at those points.  If I understood what was being inserted, perhaps I could
figure out a solution.

If anyone can help, I'd greatly appreciate it.  The original message is
included below this one.

TIA,

Joel


-- Original message --
Hi all,

I have searched the web, the newsgroups, and the archives, but all I can
find is one reference to someone having this trouble with Python and a bunch
of references that list this as predefined error #13 according to
http://www.sqlite.org/capi3.html.

What I can't find is any help in determining why a program might receive
this error.  The database is only 203KB and has 6 tables (maybe 1,000 rows
in all the tables combined) running off a 1GB CompactFlash card on a Windows
Mobile 5 device with 256MB of onboard RAM: 50MB of that is dedicated to
programs and the rest is storage.  The only app running on the device is the
one in question.  The error occurs at seemingly random times and forces the
user to end the application and start over.

At this point I don't know much else: the user is in the field and I will
have the device back late this afternoon for debugging.  I was hoping to
have a head start before I get the device, because as it is I have no idea
what the cause could be.
-- End Original message --


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Still getting "Insertion failed because database is full." errors

Guessing from his call stack, he's doing a select: ExecuteReader executes a
statement that must return a result set (i.e., a SELECT).

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, 13 April 2007 17:57
To: [EMAIL PROTECTED]
Subject: Re: [sqlite] Still getting "Insertion failed because database is
full." errors

Are you sure that you are not exceeding the capacity of the flash memory to
handle writes?

Joel Cochran wrote:
> Hi folks,
> 
> I had sent this message out a couple of weeks ago, and I am still 
> searching for a solution.  I looked at the application design and made 
> a modest
> change: I now create a single SQLiteConnection and use it from start 
> to finish of the application.  Previously I was creating multiple 
> connections (and disconnecting, disposing them and then running GC), 
> but all that has been replaced with the single connection approach.
> 
> At first I thought this had solved the problem, because all in house 
> testing runs beautifully.  However, as soon as the device is sent to 
> the field, the error starts again.  Unfortunately, it means that I 
> have never been able to catch this in debug.  I did, however, change 
> the error reporting a little and got some more information.  The 
> SQLiteException I am now getting includes this information:
> 
> Insertion failed because the database is full database or disk is full
> 
> at System.Data.SQLite.SQLite3.Reset()
> at System.Data.SQLite.SQLite3.Step()
> at System.Data.SQLite.SQLiteDataReader.NextResult()
> at System.Data.SQLite.SQLiteDataReader.ctor()
> at System.Data.SQLite.SQLiteCommand.ExecuteReader()
> at System.Data.SQLite.SQLiteCommand.ExecuteDbDataReader()
> at ... etc etc
> 
> 
> I downloaded the C source and tried to read through it, but honestly I 
> am not a C programmer and didn't get very far.
> 
> Other than a possible bug, the only thing I can think of is that there 
> is something fundamentally wrong with my architecture.  What I can't 
> get, is why the message has anything to do with inserting.  While 
> there are several actions in my product that Insert into the database, 
> the error never occurs at those points.  If I understood what was 
> being inserted, perhaps I could figure out a solution.
> 
> If anyone can help, I'd greatly appreciate it.  The original message 
> is included below this one.
> 
> TIA,
> 
> Joel
> 
> 
> -- Original message -- Hi all,
> 
> I have searched the web, the newsgroups, and the archives, but all I 
> can find is one reference to someone having this trouble with Python 
> and a bunch of references that list this as predefined error #13 
> according to http://www.sqlite.org/capi3.html.
> 
> What I can't find is any help in determining why a program might 
> receive this error.  The database is only 203KB and has 6 tables 
> (maybe 1,000 rows in all the tables combined) running off a 1GB 
> CompactFlash card on a Windows Mobile 5 device with 256MB of onboard 
> RAM: 50MB of that is dedicated to programs and the rest is storage.  
> The only app running on the device is the one in question.  The error 
> occurs at seemingly random times and forces the user to end the 
> application and start over.
> 
> At this point I don't know much else: the user is in the field and I 
> will have the device back late this afternoon for debugging.  I was 
> hoping to have a head start before I get the device, because as it is 
> I have no idea what the cause could be.
> -- End Original message --
> 



-
To unsubscribe, send email to [EMAIL PROTECTED]

-



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Exclusive Access

You can only open one connection in exclusive mode - even in one process.
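A small sketch of that behaviour, using Python's sqlite3 module and a throwaway file: with `PRAGMA locking_mode = EXCLUSIVE`, the lock taken by the connection's first write is never released, so a second connection (even in the same process) is shut out.

```python
import os
import sqlite3
import tempfile

# Throwaway database file for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

con1 = sqlite3.connect(path)
con1.execute("PRAGMA locking_mode = EXCLUSIVE")
con1.execute("CREATE TABLE t (x)")   # first write takes the exclusive lock
con1.commit()                        # ...and in EXCLUSIVE mode it is kept

# A second connection cannot even read while con1 stays open.
con2 = sqlite3.connect(path, timeout=0.1)
try:
    con2.execute("SELECT * FROM t")
    print("second connection got access")
except sqlite3.OperationalError as e:
    print("second connection refused:", e)
```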

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 10 April 2007 20:14
To: sqlite-users@sqlite.org
Subject: [sqlite] Exclusive Access

A quick question in case someone has experience before I delve into the
source and write some test programs.

Is there any impediment for two threads to have concurrent read mode access
to a database opened exclusively by a process?  Is it feasible that they use
the same database connection or would they each need their own database
connection and use shared cache mode?

Ideally I should like them to share a connection and a common local cache
but can think of many reasons why that might not be possible.



-
To unsubscribe, send email to [EMAIL PROTECTED]

-



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: AW: AW: AW: [sqlite] Function Language

Thank you for the links.  AppWeb has a different paradigm from what I am 
building but makes an interesting study.


on Scully wrote:

AppWeb might be suitable for what you need:
http://www.appwebserver.org/
http://www.appwebserver.org/products/appWeb/doc/api/gen/appweb/esp_8js.html
http://www.appwebserver.org/products/appWeb/doc/api/gen/appweb/c-api.html

It's small-footprint, extensible through modules or compile-time (C or
C++ API), server-side JavaScript and is easy to integrate SQLite's C
API, using one line of integration code, per
http://www.appwebserver.org/forum/viewtopic.php?t=880=sqlite3

(Have no affiliation with AppWeb, etc. Just started using it, lately.)

HTH

On 4/6/07, John Stanton <[EMAIL PROTECTED]> wrote:


Sure.

Michael Ruck wrote:
> If you come up with something, please share it.
>
> Mike
>
> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED]
> Sent: Friday, 6 April 2007 20:49
> To: sqlite-users@sqlite.org
> Subject: Re: AW: AW: [sqlite] Function Language
>
> By making the connection from browser to server an RPC model I have
> mapped the interface to the database instead of trying to map the database
> to the Javascript objects.  That also minimizes the network traffic.  My
> application server design has a criterion that network traffic should be
> minimized.  It is a great way to increase effective bandwidth.
>
> The generated HTML/Javascript pages have no redundancy and are compressed if
> they are above a threshold size.  The RPC between the browser and the server
> uses sparse messages and not a great deal of redundancy.  It also minimizes
> the extent of execution of the slow, interpreted Javascript by partitioning
> as much basic housekeeping to the more efficient server side processing.
>
> The server holds each database open as exclusive and shares it between users
> and multiple user connections, minimizing database open and close actions,
> keeping local cacheing and avoiding file locking.
>
> Currently I am dreaming up ways of implementing the Javascript function
> level in Sqlite.  Creating the JS VM when the DB opens and having one VM per
> open DB instance seems to be a way of avoiding contentions and getting
> fairly efficient execution and a reasonable route for an initial prototype.
> Garbage collection should take care of stale objects.  I can store user
> defined functions as text items in a dedicated table defining aggregate and
> scalar functions in a database.  A syntax directed editor linked to JSLint
> and RCS can maintain syntactically correct code with version control.
>
> Javascript would not be a good way to implement simple functions in Sqlite,
> the current custom function interface to native code is far preferable for
> that but it is appropriate for implementing larger functions and ones which
> require frequent user alterations.
>
> Michael Ruck wrote:
>
>>How do you treat objects containing other objects? JS has the
>>capabilities to build powerful object models with hashes etc. Objects,
>>such as these don't map nicely to the relational model.
>>
>>I'm going a different way - I'm using static HTML, which requests JSON
>>from the server and uses this to update the UI on the client side.
>>However mapping JSON to SQL is still somewhat of an issue. I've
>>tackled it by including metadata in JSON, but that's not very clean
>>and I'm not really happy with it (yet.)
>>
>>Mike
>>
>>-Original Message-
>>From: John Stanton [mailto:[EMAIL PROTECTED]
>>Sent: Friday, 6 April 2007 18:22
>>To: sqlite-users@sqlite.org
>>Subject: Re: AW: [sqlite] Function Language
>>
>>Thankyou for the thoughtful comments.  It strikes me that a JS object
>>and an Sqlite row map nicely.  When I was writing the part of my
>>application server which encapsulates Sqlite rows in JSON I was struck
>>by how simple the interface was, particularly compared to XML which
>>involves a little more attention to create well formed XML and a whole
>>lot more involvement to parse and generate on the client side.  Adding
>>JSON as an alternative to XML was a good idea.
>>
>>I do not try to create dynamic HTML pages using JS and use a much
>>simpler and more efficent application specific language compiled to
>>byte code (somewhat analogous to Java or VDBE bytecode).  At that
>>level JS is merged into the page in such a way that the JS is matched
>>to the browser and locale to remove redundancy.
>>
>>The sophistication in the otherwise simple application specific
>>language is an inference engine to resolve a knowledge base of rules
>>stored in the Sqlite database and an

Re: AW: AW: AW: [sqlite] Function Language


AppWeb might be suitable for what you need:
http://www.appwebserver.org/
http://www.appwebserver.org/products/appWeb/doc/api/gen/appweb/esp_8js.html
http://www.appwebserver.org/products/appWeb/doc/api/gen/appweb/c-api.html

It's small-footprint, extensible through modules or compile-time (C or
C++ API), server-side JavaScript and is easy to integrate SQLite's C
API, using one line of integration code, per
http://www.appwebserver.org/forum/viewtopic.php?t=880=sqlite3

(Have no affiliation with AppWeb, etc. Just started using it, lately.)

HTH

On 4/6/07, John Stanton <[EMAIL PROTECTED]> wrote:

Sure.

Michael Ruck wrote:
> If you come up with something, please share it.
>
> Mike
>
> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED]
> Sent: Friday, 6 April 2007 20:49
> To: sqlite-users@sqlite.org
> Subject: Re: AW: AW: [sqlite] Function Language
>
> By making the connection from browser to server an RPC model I have
> mapped the interface to the database instead of trying to map the database
> to the Javascript objects.  That also minimizes the network traffic.  My
> application server design has a criterion that network traffic should be
> minimized.  It is a great way to increase effective bandwidth.
>
> The generated HTML/Javascript pages have no redundancy and are compressed if
> they are above a threshold size.  The RPC between the browser and the server
> uses sparse messages and not a great deal of redundancy.  It also minimizes
> the extent of execution of the slow, interpreted Javascript by partitioning
> as much basic housekeeping to the more efficient server side processing.
>
> The server holds each database open as exclusive and shares it between users
> and multiple user connections, minimizing database open and close actions,
> keeping local caching and avoiding file locking.
>
> Currently I am dreaming up ways of implementing the Javascript function
> level in Sqlite.  Creating the JS VM when the DB opens and having one VM per
> open DB instance seems to be a way of avoiding contentions and getting
> fairly efficient execution and a reasonable route for an initial prototype.
> Garbage collection should take care of stale objects.  I can store user
> defined functions as text items in a dedicated table defining aggregate and
> scalar functions in a database.  A syntax directed editor linked to JSLint
> and RCS can maintain syntactically correct code with version control.
>
> Javascript would not be a good way to implement simple functions in Sqlite,
> the current custom function interface to native code is far preferable for
> that, but it is appropriate for implementing larger functions and ones which
> require frequent user alterations.
>
> Michael Ruck wrote:
>
>>How do you treat objects containing other objects? JS has the
>>capabilities to build powerful object models with hashes etc. Objects,
>>such as these don't map nicely to the relational model.
>>
>>I'm going a different way - I'm using static HTML, which requests JSON
>>from the server and uses this to update the UI on the client side.
>>However mapping JSON to SQL is still somewhat of an issue. I've
>>tackled it by including metadata in JSON, but that's not very clean
>>and I'm not really happy with it (yet.)
>>
>>Mike
>>
>>-Original Message-
>>From: John Stanton [mailto:[EMAIL PROTECTED]
>>Sent: Friday, April 6, 2007 18:22
>>To: sqlite-users@sqlite.org
>>Subject: Re: AW: [sqlite] Function Language
>>
>>Thank you for the thoughtful comments.  It strikes me that a JS object
>>and an Sqlite row map nicely.  When I was writing the part of my
>>application server which encapsulates Sqlite rows in JSON I was struck
>>by how simple the interface was, particularly compared to XML which
>>involves a little more attention to create well formed XML and a whole
>>lot more involvement to parse and generate on the client side.  Adding
>>JSON as an alternative to XML was a good idea.
>>
>>I do not try to create dynamic HTML pages using JS and use a much
>>simpler and more efficient application specific language compiled to
>>byte code (somewhat analogous to Java or VDBE bytecode).  At that
>>level JS is merged into the page in such a way that the JS is matched
>>to the browser and locale to remove redundancy.
>>
>>The sophistication in the otherwise simple application specific
>>language is an inference engine to resolve a knowledge base of rules
>>stored in the Sqlite database and an event driven capability linked to
>>the activity of the database.
>>
>>The same capability which creates dynamic HTML/Javascript will also
>>generate PostScript to deliver PDF and no doubt other formats which
>>may show up in the future.

AW: [sqlite] SQLite and nested transactions

That Sybase and MS SQL match on their behavior is no surprise considering
their common heritage ;) I suppose MS (and Sybase, for that matter) haven't
done anything on transaction support since they split their code
bases.

-Original Message-
From: Griggs, Donald [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 9, 2007 19:50
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] SQLite and nested transactions

Regarding:  "...As Igor pointed out this does not resemble a full
implementation of transactions, as nested transactions can be committed and
rolled back independently of the outer parent transaction."


Nonetheless, it would seem, just from the couple of pages below, that some
DB vendors find the less-than-full implementation of nested
transactions to be useful for at least some purposes.   

(I.e., an implementation in which inner transactions do little more than
adjust counters.  If anything is rolled back, then entire outer transaction
is rolled back.) 

I make no claim to being an expert here -- I'm just a googler.  ;-)

http://manuals.sybase.com/onlinebooks/group-as/asg1250e/sqlug/@Generic__
BookTextView/53608;pt=53608;uf=0 

"Nested transactions
You can nest transactions within other transactions. When you nest begin
transaction and commit transaction statements, the outermost pair actually
begin and commit the transaction. The inner pairs just keep track of the
nesting level. Adaptive Server does not commit the transaction until the
commit transaction that matches the outermost begin transaction is issued.
Normally, this transaction "nesting" occurs as stored procedures or triggers
that contain begin/commit pairs call each other." 


http://msdn2.microsoft.com/en-us/library/ms189336.aspx

"SQL Server 2005 Books Online
Nesting Transactions  

Explicit transactions can be nested. This is primarily intended to support
transactions in stored procedures that can be called either from a process
already in a transaction or from processes that have no active transaction.

The following example shows the intended use of nested transactions. The
procedure TransProc enforces its transaction regardless of the transaction
mode of any process that executes it. If TransProc is called when a
transaction is active, the nested transaction in TransProc is largely
ignored, and its INSERT statements are committed or rolled back based on the
final action taken for the outer transaction. If TransProc is executed by a
process that does not have an outstanding transaction, the COMMIT
TRANSACTION at the end of the procedure effectively commits the INSERT
statements..."


[opinions are my own, not necessarily those of my company]


-
To unsubscribe, send email to [EMAIL PROTECTED]

-






AW: [sqlite] SQLite and nested transactions

Yes, but this violates ACID principles. As Igor pointed out this does not
resemble a full implementation of transactions, as nested transactions can
be committed and rolled back independently of the outer parent transaction.

Mike

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 9, 2007 18:38
To: sqlite-users@sqlite.org
Cc: Dennis Cote
Subject: Re: [sqlite] SQLite and nested transactions


So, if I read this code right:
* The transaction_level keeps track of how many nested transactions have
occurred.
* Only the parent transaction allows the commit.

In this case, only a single journal is required (the parent journal).
Thanks,
Ray
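
Dennis's counter-based scheme (quoted below) can be sketched against
Python's stdlib sqlite3 module; the class and variable names here are
illustrative, not from anyone's actual code:

```python
import sqlite3

# Counter-based nested transactions: only the outermost begin/commit
# touches the database; inner levels just adjust a counter, and any
# inner rollback dooms the whole outer transaction.
class NestedTx:
    def __init__(self, db):
        self.db = db          # sqlite3.Connection in manual-commit mode
        self.level = 0        # current nesting depth
        self.failed = False   # set if any inner level rolled back

    def begin(self):
        if self.level == 0:
            self.failed = False
            self.db.execute("BEGIN")
        self.level += 1

    def commit(self):
        self.level -= 1
        if self.level == 0:
            if self.failed:
                self.db.execute("ROLLBACK")
            else:
                self.db.execute("COMMIT")

    def rollback(self):
        self.failed = True
        self.commit()

db = sqlite3.connect(":memory:", isolation_level=None)  # manage tx manually
db.execute("CREATE TABLE t (x)")
tx = NestedTx(db)

tx.begin()                       # outer transaction
db.execute("INSERT INTO t VALUES (1)")
tx.begin()                       # inner "transaction"
db.execute("INSERT INTO t VALUES (2)")
tx.rollback()                    # inner rollback dooms the outer one
tx.commit()                      # outer commit actually rolls back

print(db.execute("SELECT count(*) FROM t").fetchone()[0])  # prints 0
```

The point of the exercise: the inner rollback only sets a flag, so the
commit-or-rollback decision is taken once, at the outermost level.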


 Dennis Cote <[EMAIL PROTECTED]> wrote: 
> Darren Duncan wrote:
> >
> > I will clarify that child transactions are just an elegant way of 
> > partitioning a larger task, and that parent transactions always 
> > overrule children; even if a child transaction commits successfully, 
> > a rollback of its parent means there are no lasting changes.
> Darren,
> 
> Because of this, and the fact that a transaction is basically a 
> guarantee that all or none of the enclosed statements are executed, it 
> is much simpler to implement nested transactions using a counter and 
> the existing transaction API in a set of wrapper functions. There is 
> no need to maintain all the intermediate state information. All you 
> need to know is the current nesting level, and if any enclosed 
> transaction was rolled back.
> 
> The following code shows the basic process.
> 
> /* sqlite3_exec() calls shown with abbreviated arguments */
> int transaction_level = 0;
> bool transaction_failed;
> 
> void begin_nested_transaction()
> {
>     if (transaction_level++ == 0) {
>         transaction_failed = false;
>         sqlite3_exec("begin");
>     }
> }
> 
> void commit_nested_transaction()
> {
>     if (--transaction_level == 0)
>         if (transaction_failed)
>             sqlite3_exec("rollback");
>         else
>             sqlite3_exec("commit");
> }
> 
> void rollback_nested_transaction()
> {
>     transaction_failed = true;
>     commit_nested_transaction();
> }
> 
> HTH
> Dennis Cote
> 









AW: [sqlite] Master table with child FTS table

How about managing fts_table using triggers attached to the master table?
That should take care of synchronization issues IMHO.

Mike
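
The trigger idea can be sketched as follows. A hedged illustration: the
thread uses fts2, which modern SQLite builds no longer ship, so this
assumes a build with the fts5 module instead, and the trigger names are
made up:

```python
import sqlite3

# Keep the full-text table in sync with the master table via triggers,
# instead of issuing paired INSERT/DELETE statements by hand.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE master (a, b, words);
CREATE VIRTUAL TABLE fts_table USING fts5(words);

CREATE TRIGGER master_ai AFTER INSERT ON master BEGIN
  INSERT INTO fts_table (rowid, words) VALUES (new.rowid, new.words);
END;
CREATE TRIGGER master_ad AFTER DELETE ON master BEGIN
  DELETE FROM fts_table WHERE rowid = old.rowid;
END;
CREATE TRIGGER master_au AFTER UPDATE ON master BEGIN
  DELETE FROM fts_table WHERE rowid = old.rowid;
  INSERT INTO fts_table (rowid, words) VALUES (new.rowid, new.words);
END;
""")

db.execute("INSERT INTO master VALUES ('x', 'y', 'some words')")
db.execute("UPDATE master SET words = 'other words' WHERE a = 'x'")
hits = db.execute(
    "SELECT a FROM master JOIN fts_table ON master.rowid = fts_table.rowid "
    "WHERE fts_table MATCH 'other'").fetchall()
print(hits)  # [('x',)]
```

With the triggers in place the application only ever touches master, so
the two tables cannot drift out of sync within a transaction.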

-Original Message-
From: Paul Quinn [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 7, 2007 09:08
To: sqlite-users@sqlite.org
Subject: [sqlite] Master table with child FTS table

I have a database setup that creates a master table, and a child 'Full Text
Search' (FTS) table that matches it by rowid. My problem is with keeping the
FTS table in sync with the master. Let me explain in SQL:

setup like so:
CREATE TABLE master (a, b);
CREATE VIRTUAL TABLE fts_table USING fts2(words);

inserts like so:
INSERT INTO master (a, b) VALUES (blah, blah);
INSERT INTO fts_table (rowid, words) VALUES (last_insert_rowid(), 'some words');

searches like so:
SELECT a, b, words FROM master JOIN fts_table ON master.rowid ==
fts_table.rowid WHERE words MATCH 'word';


In my scenario, I need to do a number of updates and deletes to the master
table and the fts_table needs to follow suit. However a number of problems
have surfaced:

FTS does not work for INSERT OR REPLACE. At least that is what I have found,
didn't find any docs on this.
So I resort to using delete/insert like so:
DELETE FROM fts_table WHERE rowid = <rowid>;
INSERT INTO fts_table (rowid, words) VALUES (<rowid>, 'some words');

However, I am finding that this is not reliable. The FTS table will still go
out of sync. Row counts will not match after some indeterminate number of
updates. It seems either some DELETEs are failing, or the INSERTs into
existing rowids are failing (silently - the return code is always ok). And
after the out-of-sync problems start occurring, sometimes the FTS goes to pot
and the SELECT statements start failing with SQLITE_ERROR.

So, is there a better way to do what I am trying to do?

I'm using SQLite 3.3.13, and I've tried both FTS1 and FTS2 (using FTS2 right
now).
Thanks in advance for any help.

-PQ






AW: AW: AW: [sqlite] Function Language

If you come up with something, please share it.

Mike 

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 6, 2007 20:49
To: sqlite-users@sqlite.org
Subject: Re: AW: AW: [sqlite] Function Language

By making the connection from browser to server an RPC model I have
mapped the interface to the database instead of trying to map the database
to the Javascript objects.  That also minimizes the network traffic.  My
application server design has a criterion that network traffic should be
minimized.  It is a great way to increase effective bandwidth.

The generated HTML/Javascript pages have no redundancy and are compressed if
they are above a threshold size.  The RPC between the browser and the server
uses sparse messages and not a great deal of redundancy.  It also minimizes
the extent of execution of the slow, interpreted Javascript by partitioning
as much basic housekeeping to the more efficient server side processing.

The server holds each database open as exclusive and shares it between users
and multiple user connections, minimizing database open and close actions,
keeping local caching and avoiding file locking.

Currently I am dreaming up ways of implementing the Javascript function
level in Sqlite.  Creating the JS VM when the DB opens and having one VM per
open DB instance seems to be a way of avoiding contentions and getting
fairly efficient execution and a reasonable route for an initial prototype.
Garbage collection should take care of stale objects.  I can store user
defined functions as text items in a dedicated table defining aggregate and
scalar functions in a database.  A syntax directed editor linked to JSLint
and RCS can maintain syntactically correct code with version control.

Javascript would not be a good way to implement simple functions in Sqlite,
the current custom function interface to native code is far preferable for
that, but it is appropriate for implementing larger functions and ones which
require frequent user alterations.

Michael Ruck wrote:
> How do you treat objects containing other objects? JS has the 
> capabilities to build powerful object models with hashes etc. Objects, 
> such as these don't map nicely to the relational model.
> 
> I'm going a different way - I'm using static HTML, which requests JSON 
> from the server and uses this to update the UI on the client side. 
> However mapping JSON to SQL is still somewhat of an issue. I've 
> tackled it by including metadata in JSON, but that's not very clean 
> and I'm not really happy with it (yet.)
> 
> Mike
> 
> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED]
> Sent: Friday, April 6, 2007 18:22
> To: sqlite-users@sqlite.org
> Subject: Re: AW: [sqlite] Function Language
> 
> Thank you for the thoughtful comments.  It strikes me that a JS object 
> and an Sqlite row map nicely.  When I was writing the part of my 
> application server which encapsulates Sqlite rows in JSON I was struck 
> by how simple the interface was, particularly compared to XML which 
> involves a little more attention to create well formed XML and a whole 
> lot more involvement to parse and generate on the client side.  Adding 
> JSON as an alternative to XML was a good idea.
> 
> I do not try to create dynamic HTML pages using JS and use a much 
> simpler and more efficient application specific language compiled to 
> byte code (somewhat analogous to Java or VDBE bytecode).  At that 
> level JS is merged into the page in such a way that the JS is matched 
> to the browser and locale to remove redundancy.
> 
> The sophistication in the otherwise simple application specific 
> language is an inference engine to resolve a knowledge base of rules 
> stored in the Sqlite database and an event driven capability linked to 
> the activity of the database.
> 
> The same capability which creates dynamic HTML/Javascript will also 
> generate PostScript to deliver PDF and no doubt other formats which 
> may show up in the future.
> 
> Michael Ruck wrote:
> 
>>I am all for it and am very interested in your project as I'm working 
>>on something similar. I've been using JS to create dynamic HTML pages 
>>in combination with SQLite using a JSON wrapper from this list. The 
>>only issue I see here is the treatment of JS objects - there's again 
>>the OO and relation mismatch involved. You may need some kind of OO 
>>mapper to map to SQLite tables/views.
>>
>>HTH,
>>Mike
>>
>>-Original Message-
>>From: John Stanton [mailto:[EMAIL PROTECTED]
>>Sent: Friday, April 6, 2007 02:43
>>To: sqlite-users@sqlite.org
>>Subject: [sqlite] Function Language
>>
>>I have been looking around for a handy way to implement elaborate
>>functions in Sqlite.

Re: AW: AW: [sqlite] Function Language

By making the connection from browser to server an RPC model I 
have mapped the interface to the database instead of trying to map the 
database to the Javascript objects.  That also minimizes the network 
traffic.  My application server design has a criterion that network 
traffic should be minimized.  It is a great way to increase effective 
bandwidth.


The generated HTML/Javascript pages have no redundancy and are 
compressed if they are above a threshold size.  The RPC between the 
browser and the server uses sparse messages and not a great deal of 
redundancy.  It also minimizes the extent of execution of the slow, 
interpreted Javascript by partitioning as much basic housekeeping to the 
more efficient server side processing.


The server holds each database open as exclusive and shares it between 
users and multiple user connections, minimizing database open and close 
actions, keeping local caching and avoiding file locking.


Currently I am dreaming up ways of implementing the Javascript function 
level in Sqlite.  Creating the JS VM when the DB opens and having one VM 
per open DB instance seems to be a way of avoiding contentions and 
getting fairly efficient execution and a reasonable route for an initial 
prototype.  Garbage collection should take care of stale objects.  I can 
store user defined functions as text items in a dedicated table defining 
aggregate and scalar functions in a database.  A syntax directed editor 
linked to JSLint and RCS can maintain syntactically correct code with 
version control.


Javascript would not be a good way to implement simple functions in 
Sqlite, the current custom function interface to native code is far 
preferable for that, but it is appropriate for implementing larger 
functions and ones which require frequent user alterations.


Michael Ruck wrote:

How do you treat objects containing other objects? JS has the capabilities
to build powerful object models with hashes etc. Objects, such as these
don't map nicely to the relational model.

I'm going a different way - I'm using static HTML, which requests JSON from
the server and uses this to update the UI on the client side. However
mapping JSON to SQL is still somewhat of an issue. I've tackled it by
including metadata in JSON, but that's not very clean and I'm not really
happy with it (yet.)

Mike

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 6, 2007 18:22

To: sqlite-users@sqlite.org
Subject: Re: AW: [sqlite] Function Language

Thank you for the thoughtful comments.  It strikes me that a JS object and an
Sqlite row map nicely.  When I was writing the part of my application server
which encapsulates Sqlite rows in JSON I was struck by how simple the
interface was, particularly compared to XML which involves a little more
attention to create well formed XML and a whole lot more involvement to
parse and generate on the client side.  Adding JSON as an alternative to XML
was a good idea.

I do not try to create dynamic HTML pages using JS and use a much simpler
and more efficient application specific language compiled to byte code
(somewhat analogous to Java or VDBE bytecode).  At that level JS is merged
into the page in such a way that the JS is matched to the browser and locale
to remove redundancy.

The sophistication in the otherwise simple application specific language is
an inference engine to resolve a knowledge base of rules stored in the
Sqlite database and an event driven capability linked to the activity of the
database.

The same capability which creates dynamic HTML/Javascript will also generate
PostScript to deliver PDF and no doubt other formats which may show up in
the future.

Michael Ruck wrote:

I am all for it and am very interested in your project as I'm working 
on something similar. I've been using JS to create dynamic HTML pages 
in combination with SQLite using a JSON wrapper from this list. The 
only issue I see here is the treatment of JS objects - there's again 
the OO and relation mismatch involved. You may need some kind of OO 
mapper to map to SQLite tables/views.


HTH,
Mike

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Friday, April 6, 2007 02:43
To: sqlite-users@sqlite.org
Subject: [sqlite] Function Language

I have been looking around for a handy way to implement elaborate 
functions in Sqlite.  Implementing PL/SQL came to mind but recently it 
struck me that Javascript has data rules very similar to Sqlite and 
has the useful property that all executables are just data.


Does anyone have views for or against Javascript as an embedded 
language for realizing functions?  I see as a positive its data typing 
affinity with Sqlite and its widespread usage and a large base of active


programmers.



Re: AW: [sqlite] Function Language

Thank you for the thoughtful comments.  It strikes me that a JS object 
and an Sqlite row map nicely.  When I was writing the part of my 
application server which encapsulates Sqlite rows in JSON I was struck 
by how simple the interface was, particularly compared to XML which 
involves a little more attention to create well formed XML and a whole 
lot more involvement to parse and generate on the client side.  Adding 
JSON as an alternative to XML was a good idea.
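
The row-to-JSON encapsulation described above is indeed simple; a minimal
sketch using Python's stdlib sqlite3 and json modules (the table and
column names are invented for illustration):

```python
import json
import sqlite3

# Each SQLite row becomes a JSON object keyed by column name.
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row  # rows expose column names via keys()
db.executescript("""
CREATE TABLE magazine (id INTEGER PRIMARY KEY, title TEXT);
INSERT INTO magazine (title) VALUES ('MP3 World'), ('DB Monthly');
""")

def rows_to_json(cursor):
    # sqlite3.Row supports dict() conversion, so the mapping is one line
    return json.dumps([dict(r) for r in cursor.fetchall()])

payload = rows_to_json(db.execute("SELECT id, title FROM magazine"))
print(payload)
```

Parsing on the browser side is then a single JSON parse, with no
XML-style well-formedness bookkeeping on either end.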


I do not try to create dynamic HTML pages using JS and use a much 
simpler and more efficient application specific language compiled to byte 
code (somewhat analogous to Java or VDBE bytecode).  At that level JS is 
merged into the page in such a way that the JS is matched to the browser 
and locale to remove redundancy.


The sophistication in the otherwise simple application specific language 
is an inference engine to resolve a knowledge base of rules stored in 
the Sqlite database and an event driven capability linked to the 
activity of the database.


The same capability which creates dynamic HTML/Javascript will also 
generate PostScript to deliver PDF and no doubt other formats which may 
show up in the future.


Michael Ruck wrote:

I am all for it and am very interested in your project as I'm working on
something similar. I've been using JS to create dynamic HTML pages in
combination with SQLite using a JSON wrapper from this list. The only issue
I see here is the treatment of JS objects - there's again the OO and
relation mismatch involved. You may need some kind of OO mapper to map to
SQLite tables/views.

HTH,
Mike

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 6, 2007 02:43

To: sqlite-users@sqlite.org
Subject: [sqlite] Function Language

I have been looking around for a handy way to implement elaborate functions in
Sqlite.  Implementing PL/SQL came to mind but recently it struck me that
Javascript has data rules very similar to Sqlite and has the useful property
that all executables are just data.

Does anyone have views for or against Javascript as an embedded language for
realizing functions?  I see as a positive its data typing affinity with
Sqlite and its widespread usage and a large base of active programmers.












AW: [sqlite] Function Language

I am all for it and am very interested in your project as I'm working on
something similar. I've been using JS to create dynamic HTML pages in
combination with SQLite using a JSON wrapper from this list. The only issue
I see here is the treatment of JS objects - there's again the OO and
relation mismatch involved. You may need some kind of OO mapper to map to
SQLite tables/views.

HTH,
Mike

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 6, 2007 02:43
To: sqlite-users@sqlite.org
Subject: [sqlite] Function Language

I have been looking around for a handy way to implement elaborate functions in
Sqlite.  Implementing PL/SQL came to mind but recently it struck me that
Javascript has data rules very similar to Sqlite and has the useful property
that all executables are just data.

Does anyone have views for or against Javascript as an embedded language for
realizing functions?  I see as a positive its data typing affinity with
Sqlite and its widespread usage and a large base of active programmers.








AW: [sqlite] storing funky text in TEXT field

Actually, UTF-8 is the better choice compared to UTF-16. I would start by
turning on UTF-8 as the character set on your web server and ensure that it
is also specified as the document character set in all generated HTML pages.
This gives browsers a hint about the text encoding to use to render pages.

Mike 
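
The pragma side of this can be sketched with Python's stdlib sqlite3
(which embeds SQLite): check the database encoding and round-trip some
non-ASCII text. Note the encoding pragma only takes effect before the
database file is first created; afterwards it is fixed.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA encoding = 'UTF-8'")  # a no-op here; UTF-8 is the default
print(db.execute("PRAGMA encoding").fetchone()[0])  # UTF-8

# Round-trip "funky" letters through a TEXT column
db.execute("CREATE TABLE t (s TEXT)")
db.execute("INSERT INTO t VALUES (?)", ("Ræder ünïcode",))
s = db.execute("SELECT s FROM t").fetchone()[0]
print(s)  # Ræder ünïcode
```

If the letters get clobbered, the corruption is happening in the web
layer (form decoding or page charset), not inside SQLite.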

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of P
Kishor
Sent: Thursday, April 5, 2007 20:19
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] storing funky text in TEXT field

On 4/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> "P Kishor" <[EMAIL PROTECTED]> wrote:
> >
> > But, the line I quoted in my OP, namely "TEXT. The value is a text 
> > string, stored using the database encoding (UTF-8, UTF-16BE or 
> > UTF-16-LE)." is confusing me. What is this "database encoding" and 
> > where do I have to tackle this?
> >
> > Do I have to turn on some kind of magic pragma setting? How? Where?
> > And, which one is preferable? UTF-8 or UTF-16, or one of the 16 
> > variants, BE/LE?
> >
>
> http://www.sqlite.org/pragma.html#pragma_encoding
>


Well, that was easy. My apologies for coming so close to the answer myself
but not really looking.

Ok, so here is the follow-up question, which hopefully will not be so
"clueless" as the first one. I entered "pragma encoding;" for a db I already
have -- it returns UTF-8, which seems like the default setting. Which
explains why when I first created the db by inserting data from an existing
file, I got all the funky letters. But, when I updated the same data via my
web app, those funky letters got clobbered. So, I have to create a better
web application. However, I do want some starting advice -- is UTF-8 good
enough for what I am concerned about or should I start learning about UTF-16
BE/LE?

--
Puneet Kishor http://punkish.eidesis.org/ Nelson Inst. for Env. Studies,
UW-Madison http://www.nelson.wisc.edu/ Open Source Geospatial Foundation
http://www.osgeo.org/education/
-
collaborate, communicate, compete
=








Re: AW: [sqlite] Soft search in database

I built something like that where each word was translated into a token 
and a key built from the token and the position of the word and used to 
build a tree.  The tree access was fast and could probably be adapted to 
produce strict ranking by position.  The complexity of the method is the 
need for a dictionary to use for conversion from word to token.
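
The word/position idea can be roughed out as follows. A simplified
sketch, not the implementation described above: no tokens, dictionary,
or tree, just word positions plus an adjacency bonus in the spirit of
the "MP3 Player" example:

```python
from collections import defaultdict

# Map each word to (doc, position) pairs, then score documents by term
# occurrences and boost those where the two query terms appear adjacent.
def build_index(docs):
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            index[word].append((doc_id, pos))
    return index

def rank(index, query):
    terms = query.lower().split()
    scores = defaultdict(int)
    positions = defaultdict(dict)  # doc -> term -> list of positions
    for t in terms:
        for doc_id, pos in index.get(t, []):
            scores[doc_id] += 1                      # occurrence score
            positions[doc_id].setdefault(t, []).append(pos)
    for doc_id, by_term in positions.items():
        if len(terms) == 2 and all(t in by_term for t in terms):
            a, b = terms
            if any(q - p == 1 for p in by_term[a] for q in by_term[b]):
                scores[doc_id] += 10                 # adjacency bonus
    return sorted(scores, key=scores.get, reverse=True)

docs = {1: "cheap MP3 player reviewed", 2: "player profiles and MP3 news"}
print(rank(build_index(docs), "MP3 player"))  # [1, 2]
```

Document 1 wins because "MP3 player" occurs as an adjacent phrase; a
real implementation would generalize the bonus to arbitrary term counts
and distances.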


Martin Pfeifle wrote:

Unfortunately, the fts module of sqlite does not support "fuzzy text search = google 
search".
What you first need is a similarity measure between strings, e.g. the 
Edit-distance.
Based on such a similarity measure, you could build up an appropriate index 
structure,
e.g.  a Relational M-tree (cf. 
deposit.ddb.de/cgi-bin/dokserv?idn=972667849_var=d1_ext=pdf=972667849.pdf
 Chapter 10.3)
Such a module should not only support range queries, e.g. give me all strings 
which have a distance smaller than eps to my query string, but also ranked 
nearest neighbor queries.
 
We also urgently need such a module, and are thinking about implementing it on our own. I would appreciate it if efforts could be synchronized. 
Best Martin



- Original Message 
From: Michael Schlenker <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Tuesday, March 6, 2007, 09:46:52
Subject: Re: [sqlite] Soft search in database


Henrik Ræder wrote:


 Hi

 (First post - hope it's an appropriate place)

 I've been implementing a database of a few MB of text (indexing
magazines) in SQLite, and so far have found it to work really well.

 Now my boss, who has a wonderfully creative mind, asks me to implement a
full-text search function which is not the usual simplistic 'found' /
'not found', but more Google-style where a graded list of results is 
returned.


 For example, in a search for "MP3 Player", results with the phrases next
to each other would get a high rating, as would records with a high
occurance of the keywords.

 This falls outside the usual scope of SQL, but would still seem a
relatively common problem to tackle.

 Any ideas (pointers) how to tackle this?


You have come to the right place.

Take a closer look at:
http://www.sqlite.org/cvstrac/wiki?p=FullTextIndex

Michael







AW: [sqlite] Soft search in database

Unfortunately, the fts module of sqlite does not support "fuzzy text search = 
google search".
What you first need is a similarity measure between strings, e.g. the 
Edit-distance.
Based on such a similarity measure, you could build up an appropriate index 
structure,
e.g.  a Relational M-tree (cf. 
deposit.ddb.de/cgi-bin/dokserv?idn=972667849_var=d1_ext=pdf=972667849.pdf
 Chapter 10.3)
Such a module should not only support range queries, e.g. give me all strings 
which have a distance smaller than eps to my query string, but also ranked 
nearest neighbor queries.
 
We also urgently need such a module, and are thinking about implementing it on 
our own. I would appreciate it if efforts could be synchronized. 
Best Martin
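
For reference, the edit distance mentioned above can be computed with the
standard dynamic-programming recurrence; a real fuzzy-search module would
pair it with an index structure such as the M-tree rather than scanning
every string:

```python
# Levenshtein edit distance via the classic row-by-row DP.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Range query: all strings within eps of the query string
words = ["player", "prayer", "plier", "mp3"]
print([w for w in words if edit_distance("player", w) <= 2])
# ['player', 'prayer', 'plier']
```

A ranked nearest-neighbor query is the same computation sorted by
distance instead of thresholded.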


- Original Message 
From: Michael Schlenker <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Tuesday, March 6, 2007, 09:46:52
Subject: Re: [sqlite] Soft search in database


Henrik Ræder wrote:
>   Hi
> 
>   (First post - hope it's an appropriate place)
> 
>   I've been implementing a database of a few MB of text (indexing
> magazines) in SQLite, and so far have found it to work really well.
> 
>   Now my boss, who has a wonderfully creative mind, asks me to implement a
> full-text search function which is not the usual simplistic 'found' /
> 'not found', but more Google-style where a graded list of results is 
> returned.
> 
>   For example, in a search for "MP3 Player", results with the phrases next
> to each other would get a high rating, as would records with a high
> occurance of the keywords.
> 
>   This falls outside the usual scope of SQL, but would still seem a
> relatively common problem to tackle.
> 
>   Any ideas (pointers) how to tackle this?
You have come to the right place.

Take a closer look at:
http://www.sqlite.org/cvstrac/wiki?p=FullTextIndex

Michael

-- 
Michael Schlenker
Software Engineer

CONTACT Software GmbH   Tel.:   +49 (421) 20153-80
Wiener Straße 1-3   Fax:+49 (421) 20153-41
28359 Bremen
http://www.contact.de/  E-Mail: [EMAIL PROTECTED]

Sitz der Gesellschaft: Bremen | Geschäftsführer: Karl Heinz Zachries
Eingetragen im Handelsregister des Amtsgerichts Bremen unter HRB 13215





AW: [sqlite] Obtaining randomness on win32

> randomness as you need.  But I do not know how to do this on
> win32 and wince.  The current implementation seeds the random

As Michael already suggested, you should use the CryptoAPI
(CryptAcquireContext, CryptGenRandom). This API is supported by all 32
bit desktop versions and by Windows CE starting with version 2.10.

Christian
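
For comparison, higher-level runtimes wrap exactly these facilities:
Python's os.urandom() reads /dev/urandom on Unix and the Windows system
CSPRNG (CryptGenRandom in older CPython, BCryptGenRandom in newer
versions) on win32, which is one portable way to seed a PRNG:

```python
import os
import random

# Pull 16 bytes from the OS cryptographic RNG and use them as a seed.
seed_bytes = os.urandom(16)
seed = int.from_bytes(seed_bytes, "big")
rng = random.Random(seed)   # deterministic PRNG, well-seeded

print(len(seed_bytes))      # 16
print(0.0 <= rng.random() < 1.0)  # True
```

The seed is the only part that needs real entropy; the PRNG stream
itself can stay fast and deterministic.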




AW: [sqlite] Obtaining randomness on win32

I'm not sure if this helps, but QueryPerformanceCounter could be
a source of semi-random 64-bit integers. It returns the processor's
running time in nanoseconds. I'm not aware of anything which returns
truly random values.

On Windows itself you could use the CryptAcquireContext, CryptGenRandom 
and CryptReleaseContext. I think that doesn't work on WinCE though.

Michael

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> Sent: Monday, January 29, 2007 17:15
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Obtaining randomness on win32
> 
> The pseudo-random number generator (PRNG) in SQLite is 
> becoming more important so it seem good to make sure it is 
> well seeded.  On Unix this is easy - just open /dev/urandom 
> and read out as much randomness as you need.  But I do not 
> know how to do this on
> win32 and wince.  The current implementation seeds the random 
> number generator on these platforms by grabbing a copy of the 
> current system time.  See the sqlite3WinRandomSeed() function 
> in os_win.c for details.  This is not a very good method for 
> seeding a PRNG.
> 
> Can someone with more knowledge of win32 and wince please 
> suggest a better method for seeding the PRNG on those platforms?
> 
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>
> 
> 
> 
> 





AW: [sqlite] Of shared cache, table locks and transactions

Yes, I second this opinion. However I believe sqlite is ACID, just not when
shared cache mode is enabled...

Mike 

-Original Message-
From: Ken [mailto:[EMAIL PROTECTED] 
Sent: Sunday, January 14, 2007 17:00
To: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Subject: Re: [sqlite] Of shared cache, table locks and transactions

Yes, exactly my point for some time now!!!

I think it would be nice if sqlite could optionally maintain a pre-write
state version of the cached page in memory. This really means maintaining
some form of page versioning, which is already done via the pager writing to
the journal. The pager must write the original page that undergoes
modification to the journal file before it can manipulate the cached
version.

To expedite performance the journaled pages could be maintained in memory
as well as written to disk. That way a writer and reader could co-exist:
 - The writer modifies a page (acquiring a write page lock).
 - Make a copy in memory, saving a reference to it via the cache as
   well as its offset in the journal.
 - The original page is then written to the disk journal.
 - If the journal cache exceeds memory capacity, just release pages and
   retain a file offset pointer in memory.

The reader, when encountering a locked page, could then check the cached
journal pages. If not found in the cache, use the file offset reference and
read the page in from the journal file.

This would take care of the simple case of writer blocking, as there is
only ever one writer. The original page is sufficient to take care of
isolation in ACID.

As it stands today, sqlite, imho, is ACD; it does not have isolated
transactional capabilities.
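As a rough illustration only (the class and method names below are hypothetical, not SQLite internals), the copy-on-write page versioning described above might be sketched like this:

```python
class VersionedPageCache:
    """Toy sketch of the idea above: keep the pre-write ("journaled")
    copy of each page in memory so a reader can still see the last
    committed version while the single writer modifies the live page."""

    def __init__(self):
        self.pages = {}    # page number -> current (possibly dirty) contents
        self.journal = {}  # page number -> pre-write copy, while a txn is open

    def write(self, pageno, data):
        # On the first write to a page in a transaction, save the original
        # copy -- mirroring the pager journaling a page before modifying it.
        if pageno not in self.journal:
            self.journal[pageno] = self.pages.get(pageno)
        self.pages[pageno] = data

    def read_committed(self, pageno):
        # A reader hitting a "locked" page falls back to the journaled copy.
        if pageno in self.journal:
            return self.journal[pageno]
        return self.pages.get(pageno)

    def commit(self):
        self.journal.clear()

    def rollback(self):
        for pageno, original in self.journal.items():
            if original is None:
                self.pages.pop(pageno, None)
            else:
                self.pages[pageno] = original
        self.journal.clear()
```

A real implementation would also spill journal pages to disk and keep only file offsets in memory when the journal cache grows too large, as suggested above.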
   
  
Dan Kennedy <[EMAIL PROTECTED]> wrote:
  On Sat, 2007-01-13 at 23:55 -0800, Peter James wrote:
> Hey folks...
> 
> I have a situation that caused me a little head-scratching and I'm 
> wondering if it's intended behavior or not.
> 
> I'm running a server thread (roughly based on test_server.c in the 
> distro) on top of the 3.3.6 library. The effectve call sequence in 
> question (all from one thread) looks something like this:
> 
> sqlite3_open("/path/to/db", &db_one);
> 
> sqlite3_prepare(db_one, "CREATE TABLE foo (id);", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);
> sqlite3_finalize(stmt_one);
> 
> sqlite3_prepare(db_one, "BEGIN DEFERRED;", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);
> sqlite3_finalize(stmt_one);
> 
> sqlite3_prepare(db_one, "INSERT INTO foo VALUES (123);", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);
> sqlite3_finalize(stmt_one);
> 
> sqlite3_prepare(db_one, "SELECT count(*) FROM foo;", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one); // point of interest #1
> sqlite3_column_int(stmt_one, 0);
> sqlite3_finalize(stmt_one);
> 
> // new connection here, previous transaction still pending...
> sqlite3_open("/path/to/db", &db_two);
> 
> sqlite3_prepare(db_two, "SELECT count(*) FROM foo;", -1, &stmt_two, NULL); // point of interest #2
> sqlite3_step(stmt_two); // point of interest #3
> sqlite3_column_int(stmt_two, 0);
> sqlite3_finalize(stmt_two);
> 
> If shared cache is DISabled, then I get "1" on the first point of 
> interest and "0" on the third point of interest, which is what I'd 
> expect. The database file is at a RESERVED lock state in both 
> locations, and the first point of interest gets uncommitted data since 
> it's in the same connection, while the second point of interest can't 
> yet see that data since it's a different connection and the transaction is
not yet committed.
> 
> On the other hand, if shared cache is ENabled, then I get "1" on the 
> first point of interest and SQLITE_LOCKED at the second point of 
> interest. This would seem to indicate an actual degradation of 
> concurrency by using shared caching. Without shared caching, readers 
> in the same thread as a pending writer are allowed. With shared 
> caching, they are not. The EXPLAIN output seems to confirm that this 
> is a result of the binary nature of table locks vs. the staged nature of
sqlite file locks.

Here's how I figure it:

When the shared-cache was DISabled, the first connection loaded its own
cache and then modified it (the INSERT statement). No writing to the disk
has occurred at this point, only to the cache owned by the first connection. 

The second connection then loaded up its own cache (from the file on
disk) and queried it. No problem here.

However when the shared-cache was ENabled the second connection piggy-backed
onto (i.e. shares) the cache opened by the first connection.
Since the pages corresponding to table "foo" in this cache contain
uncommitted modifications, SQLite prevents the second connection from
reading them - returning SQLITE_LOCKED. Otherwise, the second connection
would be reading uncommitted data.

So you're right, when you use shared-cache mode there is less concurrency in
some circumstances.

Dan.
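Dan's explanation can be reproduced with Python's sqlite3 module. A sketch, assuming a build where shared-cache mode behaves as described (the exact outcome of the shared-cache read can vary by SQLite version):

```python
import os
import sqlite3
import tempfile

# Plain (non-shared) file database: a second connection simply sees the
# last committed state while the first holds an open transaction.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
db_one = sqlite3.connect(path, isolation_level=None)
db_one.execute("CREATE TABLE foo (id)")
db_one.execute("BEGIN DEFERRED")
db_one.execute("INSERT INTO foo VALUES (123)")
assert db_one.execute("SELECT count(*) FROM foo").fetchone()[0] == 1  # point #1

db_two = sqlite3.connect(path, isolation_level=None)
assert db_two.execute("SELECT count(*) FROM foo").fetchone()[0] == 0  # point #3
db_one.execute("ROLLBACK")

# Shared cache: both connections use one cache, so the reader hits the
# writer's uncommitted pages and SQLite typically refuses with
# SQLITE_LOCKED ("database table is locked").
uri = "file:sharedcache_demo?mode=memory&cache=shared"
c1 = sqlite3.connect(uri, uri=True, isolation_level=None)
c2 = sqlite3.connect(uri, uri=True, isolation_level=None)
c1.execute("CREATE TABLE foo (id)")
c1.execute("BEGIN")
c1.execute("INSERT INTO foo VALUES (123)")
try:
    c2.execute("SELECT count(*) FROM foo").fetchone()
    print("no table lock (read-uncommitted or build difference)")
except sqlite3.OperationalError as e:
    print("second connection:", e)
```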




AW: [sqlite] Equivalent of OLE object da

An OLE object is persisted into a stream of bytes. You can store OLE objects
into SQLite as a BLOB, but you need to make your own (specialized)
implementation of one of the IPersistXXX interfaces (most likely
IPersistStream), which stores the object into an SQLite column/reads a
serialized object from an SQLite column.

Michael
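As a loose Python analogue (pickle standing in for an IPersistStream-style serialiser; table and variable names here are illustrative), storing a serialised object in a BLOB column looks like this:

```python
import pickle
import sqlite3

# Any object that can be serialised to a stream of bytes can be stored
# in a BLOB column; pickle plays the role of the IPersistXXX interface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, payload BLOB)")

obj = {"kind": "document", "pages": [1, 2, 3]}
conn.execute("INSERT INTO objects (payload) VALUES (?)",
             (sqlite3.Binary(pickle.dumps(obj)),))

# Read the byte stream back and reconstruct the object from it.
blob = conn.execute("SELECT payload FROM objects WHERE id = 1").fetchone()[0]
restored = pickle.loads(blob)
assert restored == obj
```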

-Original Message-
From: shivaranjani [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 9, 2007 05:57
To: sqlite-users@sqlite.org
Subject: [sqlite] Equivalent of OLE object da

Hi all,

Is there any equivalent of the OLE object datatype of Access in the SQLite dll?

Regards,

A. Shivaranjani



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: AW: [sqlite] sqlite performance, locking & threading

> not to spark a debate

Although the majority of this thread is as clear as mud, it is still
interesting, even for simple VBA programmers like me who have no chance
(maybe via a VB6 ActiveX exe) to use multi-threading.

RBS





Re: AW: [sqlite] sqlite performance, locking & threading


Can we please stop this thread?




Re: AW: [sqlite] sqlite performance, locking & threading

Emerson, one posts to a forum like this to get help and other ideas, not 
to spark a debate.  Many talented people gave you some of their time to 
help you solve your problem and one in particular gave you a well 
conceived and executed piece of software free of charge.  Appreciate 
their charity.


If you have some insights which have escaped the rest of us, implement 
them and present the results to the world and dazzle us.


BTW, a major advantage of Open Source software is that you do not need 
to have people explain it to you.  You have the source and that explains 
everything.  That is particularly so with Sqlite, which is clearly 
written and consequently the source reads like a book.  A few minutes 
with the source and grep and you have your answers.





Re: AW: [sqlite] sqlite performance, locking & threading


John,

Um, alright then...

But I think you're preaching to the converted; simplifying things is
what I always try to do, and not just when there's a problem.

If you followed the thread fully you would realise that there was
never a problem with my design, though that didn't stop many people
from chirping in and offering suggestions.

The problem I had was with sqlite not being compatible with the simple
design that I wanted.  I did try several alternate designs, but only
as a way of working around the problem I had with sqlite.  It took a
long time but eventually I managed to get someone to explain why
sqlite had that particular problem, and I was able to modify the
sqlite source to resolve the issue.

Unfortunately no one has yet commented on my solution, or the problem
which it addresses.  Basically sqlite has thread-safety checking
routines which work a little like mutexes.  Every time you enter a
bit of code which is potentially thread-unsafe it sets a magic number,
then resets it when it comes out.  This is an attempt to detect when
two threads are accessing the same bit of code at the same time.

Clearly it's not 100% reliable, and is subject to all kinds of thread
races, but it does provide some measure of protection.  Unfortunately,
though, the way it has been coded, an unbalanced safety check is
performed in the sqlite3_step() function.

This is equivalent to entering a mutex but never leaving, which causes
deadlock in a multithreaded program.  Only in this situation sqlite
throws a misuse error any time two or more threads use sqlite3_step()
at the same time, even if those threads are synchronised and perfectly
safe.

The easy solution is to disable the safety checks; the proper
solution is to balance out the checks in sqlite3_step() so that users
who actually wish to use sqlite in a multithreaded program are free to
synchronise access to the api without error and there is still a
reasonable level of safety checking for users who do not synchronise
properly.


Emerson
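The arrangement described here -- several threads serialising their own access to one database handle -- can be sketched in Python's sqlite3 module; the lock and helper names below are illustrative, not Emerson's actual code:

```python
import sqlite3
import threading

# One connection shared by several threads, with an explicit lock
# providing the synchronisation discussed above. check_same_thread=False
# disables Python's own per-thread ownership check on the connection.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE words (word TEXT PRIMARY KEY)")
lock = threading.Lock()

def index_words(words):
    for w in words:
        with lock:  # serialise every prepare/step against the shared handle
            conn.execute("INSERT OR IGNORE INTO words VALUES (?)", (w,))

threads = [threading.Thread(target=index_words, args=(batch,))
           for batch in (["alpha", "beta"], ["beta", "gamma"])]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(conn.execute("SELECT count(*) FROM words").fetchone()[0])  # 3 distinct words
```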

On 1/5/07, John Stanton <[EMAIL PROTECTED]> wrote:




Re: AW: [sqlite] sqlite performance, locking & threading

Work on turning "reasonable" into "adequate" or "good" and it will help 
you get an intuitive feel for the design of programs such as yours. 
Then your programs will be simple, fast and robust, as Einstein 
counselled - "Make it as simple as possible, but no simpler".


I also suggest that you take Niklaus Wirth's advice and when you run 
into a difficulty backtrack your work and scrap everything until you 
reach a point where there are no problems and start again from that 
point taking a different approach having learned a lesson from your 
previous attempt.


By the way, I doubt whether you are using a modern operating system, it 
is most likely to be old technology like Windows or Linux.  Neither 
supports much in the way of parallelism.



Re: AW: [sqlite] sqlite performance, locking & threading


John,

I have a reasonable understanding of the PC architecture, and more
appropriately the architecture which the operating system presents to
software.  The PC may be a serial device, but a modern operating
system with its multitasking scheduler attempts to emulate a
non-serial environment.  It devotes a certain amount of time to each
thread and then moves on.

Whether or not you are working on a highly physically parallel
architecture makes no difference; the design principles are the
same.  You should still build your software so that it is capable of
taking advantage of the environment that the operating system
presents.  It is the operating system you should be designing for, not
the hardware...

As it happens, the newest generation of PCs are all multi-core, and I
have been working on multi-processor environments for many years.

Emerson

On 1/4/07, John Stanton <[EMAIL PROTECTED]> wrote:

>>operation and not wait on the outcome, but start on the 

Re: AW: [sqlite] sqlite performance, locking & threading

If Emerson intuitively understood the essential architecture of the PC 
he is using he would not be having difficulty with his concept of how to 
use it.  It is essentially a serial, multi-tasking device, and 
parallelism in the forms of threading and multi-processing is a 
sophistication added with a high overhead.


I recollect an insightful CS professor impressing on his students 
the concept by explaining to them that the machines on their desks were 
descended from a device invented to be a gas pump controller.


A machine designed from first principles to manage parallel processing 
would be very different.


Michael Ruck wrote:

Hi Emerson,

I just hope you don't reinvent the wheel ;) I haven't yet had the need to
index things the way you describe it. Maybe I should take that as one of my
next pet projects to get a handle on this type of task.

The problem as I see it is basically, that any way you design this: If the
storage tasks take 90% of your indexing time, then any parallelization may
be a waste of effort. Even if you use a synchronization object you're
essentially serializing things in a (complicated) multithreaded way...

As far as static initialization: That it occurs before main() and is out of
your control was the point I was getting across. That's why I wrote that
this type of initialization should be avoided, unless there's no better
design for it.

Michael

-Ursprüngliche Nachricht-
Von: Emerson Clarke [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 3. Januar 2007 20:31

An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite performance, locking & threading

Michael,

Thanks for the advice.  During the indexing process I need to select and
optionally insert records into a table, so I can't ignore the outcomes.

Basically the indexing process does compression: for each document it
inserts words into a table and looks up keys.  Every word in the document
gets swapped with a key, and new keys are inserted as needed.

There are some problems with splitting the work up in a different way as you
suggested. I would either end up with a lot of queues, or I would have to
stagger the work so that the entire data set gets processed in stages, which
doesn't scale very well and isn't particularly fault tolerant.  When building
an index, you want the structure to be built up progressively, so that you
can pause the process and resume it later on whilst still having useful
results.

I would be worried that in a queued design, the overhead and bottlenecks
caused by the buffering, message passing, and context switching would reduce
the performance to that of a single thread, especially since the database
operations represent 90% of the work; all you would really be doing is
attempting to serialise things in a multithreaded way.

I'm sure, having worked on multithreaded systems, you appreciate that
sometimes simple designs are better, and I think I have a pretty good handle
on what it is that I'm trying to do.

You never have control over static initialisation; it happens before main().
If I were writing very specific code to suit just this situation then maybe,
as you say, I wouldn't need to worry about it.  But I'm also writing a
database API, and that API is used for many different things.  My
considerations are not just for this one problem, but also for the best
general way to code the API so that it is safe and efficient in all
circumstances.  So far the client/server design is the only way I can
achieve true thread safety.

If I could work out why sqlite3_step() causes problems across multiple
threads, I could probably make things a little faster and could do away with
the need for a client/server design.

Emerson


On 1/3/07, Michael Ruck <[EMAIL PROTECTED]> wrote:


Emerson,

Now I understand your current implementation.  You seemingly only 
partially split up the work in your code. I'd schedule the database 
operation and not wait on the outcome, but start on the next task. 
When the database finishes and has retrieved its result, schedule some 
work package on a third thread, which only processes the results etc. 
Split up the work in to repetitive, non blocking tasks. Use multiple 
queues and dedicated threads for parts of the operation or thread pools,


which process queues in parallel if possible.


From what I can tell you're already half way there.

I still don't see your static initialization problem, but that's 
another story. Actually I'd avoid using static initialization or 
static (singleton) instances, unless the design really requires it. 
Someone must control startup of the entire process, have that one 
(probably main/WinMain) take care that the work queues are available. 
Afterwards the order of thread starts doesn't matter... Actually it is 
non-deterministic anyway (unless you serialize this yourself.)


Michael

-Ursprüngliche Nachricht-
Von: Emerson Clarke [mailto:[EMAIL PROTECTED]
Gesendet: Mittwoch, 3. Januar 2007 15:14
An: sqlite-users@sqlite.org

AW: [sqlite] sqlite performance, locking & threading

Hi Emerson,

I just hope you don't reinvent the wheel ;) I haven't yet had the need to
index things the way you describe. Maybe I should take that as one of my
next pet projects to get a handle on this type of task.

The problem as I see it is basically that, any way you design this, if the
storage tasks take 90% of your indexing time, then any parallelization may
be a waste of effort. Even if you use a synchronization object you're
essentially serializing things in a (complicated) multithreaded way...

As far as static initialization: That it occurs before main() and is out of
your control was the point I was getting across. That's why I wrote that
this type of initialization should be avoided, unless there's no better
design for it.

Michael

-Ursprüngliche Nachricht-
Von: Emerson Clarke [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 3. Januar 2007 20:31
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite performance, locking & threading

Michael,

Thanks for the advice.  During the indexing process I need to select and
optionally insert records into a table, so I can't ignore the outcomes.

Basically the indexing process does compression: for each document it
inserts words into a table and looks up keys.  Every word in the document
gets swapped with a key, and new keys are inserted as needed.

There are some problems with splitting the work up in a different way as you
suggested. I would either end up with a lot of queues, or I would have to
stagger the work so that the entire data set gets processed in stages, which
doesn't scale very well and isn't particularly fault tolerant.  When building
an index, you want the structure to be built up progressively, so that you
can pause the process and resume it later on whilst still having useful
results.

I would be worried that in a queued design, the overhead and bottlenecks
caused by the buffering, message passing, and context switching would reduce
the performance to that of a single thread, especially since the database
operations represent 90% of the work; all you would really be doing is
attempting to serialise things in a multithreaded way.

I'm sure, having worked on multithreaded systems, you appreciate that
sometimes simple designs are better, and I think I have a pretty good handle
on what it is that I'm trying to do.

You never have control over static initialisation; it happens before main().
If I were writing very specific code to suit just this situation then maybe,
as you say, I wouldn't need to worry about it.  But I'm also writing a
database API, and that API is used for many different things.  My
considerations are not just for this one problem, but also for the best
general way to code the API so that it is safe and efficient in all
circumstances.  So far the client/server design is the only way I can
achieve true thread safety.

If I could work out why sqlite3_step() causes problems across multiple
threads, I could probably make things a little faster and could do away with
the need for a client/server design.

Emerson


On 1/3/07, Michael Ruck <[EMAIL PROTECTED]> wrote:
> Emerson,
>
> Now I understand your current implementation.  You seemingly only 
> partially split up the work in your code. I'd schedule the database 
> operation and not wait on the outcome, but start on the next task. 
> When the database finishes and has retrieved its result, schedule some 
> work package on a third thread, which only processes the results etc. 
> Split up the work in to repetitive, non blocking tasks. Use multiple 
> queues and dedicated threads for parts of the operation or thread pools,
which process queues in parallel if possible.
> From what I can tell you're already half way there.
>
> I still don't see your static initialization problem, but that's 
> another story. Actually I'd avoid using static initialization or 
> static (singleton) instances, unless the design really requires it. 
> Someone must control startup of the entire process, have that one 
> (probably main/WinMain) take care that the work queues are available. 
> Afterwards the order of thread starts doesn't matter... Actually it is 
> non-deterministic anyway (unless you serialize this yourself.)
>
> Michael
>
> -Ursprüngliche Nachricht-
> Von: Emerson Clarke [mailto:[EMAIL PROTECTED]
> Gesendet: Mittwoch, 3. Januar 2007 15:14
> An: sqlite-users@sqlite.org
> Betreff: Re: [sqlite] sqlite performance, locking & threading
>
> Michael,
>
> Im not sure that atomic operations would be a suitable alternative.
> The reason why im using events/conditions is so that the client thread 
> blocks until the server thread has processed the query and returned 
> the result.  If i did not need the result then a simple queueing 
> system with atomic operations or critical sections would be fine i guess.
>
> The client thread must always block or spin until the server thread 
> has completed the query.  Critical sections cant be efficiently used 
> to notify other threads of status 

AW: [sqlite] sqlite performance, locking & threading

Emerson,

Now I understand your current implementation.  You seemingly only partially
split up the work in your code. I'd schedule the database operation and not
wait on the outcome, but start on the next task. When the database finishes
and has retrieved its result, schedule some work package on a third thread,
which only processes the results, etc. Split up the work into repetitive,
non-blocking tasks. Use multiple queues and dedicated threads for parts of
the operation or thread pools, which process queues in parallel if possible.
From what I can tell you're already half way there.

I still don't see your static initialization problem, but that's another
story. Actually I'd avoid using static initialization or static (singleton)
instances, unless the design really requires it. Someone must control
startup of the entire process, have that one (probably main/WinMain) take
care that the work queues are available. Afterwards the order of thread
starts doesn't matter... Actually it is non-deterministic anyway (unless you
serialize this yourself.)

Michael

-Ursprüngliche Nachricht-
Von: Emerson Clarke [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 3. Januar 2007 15:14
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite performance, locking & threading

Michael,

I'm not sure that atomic operations would be a suitable alternative.
The reason why I'm using events/conditions is so that the client thread
blocks until the server thread has processed the query and returned the
result.  If I did not need the result, then a simple queueing system with
atomic operations or critical sections would be fine, I guess.

The client thread must always block or spin until the server thread has
completed the query.  Critical sections can't be efficiently used to notify
other threads of a status change.  I did try using critical sections in this
way, by spinning until the server thread takes a lock, then blocking and
eventually waiting for the server thread to finish.  But since there is no
way to block the server thread when there is no work to do, both the client
and server thread must sleep, which induces context switching anyway.

If you used atomic operations, how would you get the client thread to block,
and the server thread to block when it is not processing?

Events/conditions seemed to be the best solution: the server thread never
runs when it doesn't need to, and always wakes up when there is processing
to be done.

The static initialisation problem occurs because the server thread must be
running before anything which needs to use it.  If you have a static
instance of a class which accesses a database and it is initialised before
the static instance which controls the server thread, you have a problem.
It can be overcome using the initialise-on-first-use idiom, as long as
you're careful to protect the initialisation with atomic operations, but
it's still a bit complicated.

Emerson


On 1/3/07, Michael Ruck <[EMAIL PROTECTED]> wrote:
> Hi Emerson,
>
> Another remark: On Windows using Events synchronization objects 
> involves additional kernel context switches and thus slows you down 
> more than necessary. I'd suggest using a queue, which makes use of the 
> InterlockedXXX operations (I've implemented a number of those, 
> including priority based ones - so this is possible without taking a 
> single lock.) or to use critical sections - those only take the kernel 
> context switch if there really is lock contention. If you can reduce 
> the kernel context switches, your performance will likely increase
> drastically.
>
> I also don't see the static initialization problem: The queue has to 
> be available before any thread is started. No thread has ownership of 
> the queue, except may be the main thread.
>
> Michael
>
>
> -Ursprüngliche Nachricht-
> Von: Emerson Clarke [mailto:[EMAIL PROTECTED]
> Gesendet: Mittwoch, 3. Januar 2007 00:57
> An: sqlite-users@sqlite.org
> Betreff: Re: [sqlite] sqlite performance, locking & threading
>
> Nico,
>
> I have implemented all three strategies (thread-specific connections, 
> single connection multiple threads, and single thread server with 
> multiple client threads).
>
> The problem with using thread specific contexts is that you cant have 
> a single global transaction which wraps all of those contexts.  So you 
> end up having to use fine grained transactions, which decreases
performance.
>
> The single connection multiple thread alternative apparently has 
> problems with sqlite3_step being active on more than one thread at the 
> same moment, so cannot easily be used in a safe way.  But it is by far 
> the fastest and simplest alternative.
>
> The single thread server solution involves message passing between 
> threads, and even when this is done optimally with condition variables 
> (or events on
> windows) and blocking ive found that it results in a high number of 
> context switches and decreased performance.  It does however make a 
> robust basis for a 

AW: [sqlite] sqlite performance, locking & threading

Hi Emerson,

Another remark: On Windows using Events synchronization objects involves
additional kernel context switches and thus slows you down more than
necessary. I'd suggest using a queue, which makes use of the InterlockedXXX
operations (I've implemented a number of those, including priority based
ones - so this is possible without taking a single lock.) or to use critical
sections - those only take the kernel context switch if there really is lock
contention. If you can reduce the kernel context switches, your
performance will likely increase drastically.

I also don't see the static initialization problem: The queue has to be
available before any thread is started. No thread has ownership of the
queue, except may be the main thread.

Michael


-Ursprüngliche Nachricht-
Von: Emerson Clarke [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 3. Januar 2007 00:57
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite performance, locking & threading

Nico,

I have implemented all three strategies (thread-specific connections, single
connection multiple threads, and single thread server with multiple client
threads).

The problem with using thread-specific contexts is that you can't have a
single global transaction which wraps all of those contexts.  So you end up
having to use fine-grained transactions, which decreases performance.

The single connection, multiple thread alternative apparently has problems
with sqlite3_step being active on more than one thread at the same moment,
so it cannot easily be used in a safe way.  But it is by far the fastest and
simplest alternative.

The single thread server solution involves message passing between threads,
and even when this is done optimally with condition variables (or events on
Windows) and blocking, I've found that it results in a high number of context
switches and decreased performance.  It does however make a robust basis for
a wrapper API, since it guarantees that things will always be synchronised.
But using this arrangement can also result in various static initialisation
problems, since the single thread server must always be up and running
before anything which needs to use it.

Emerson

On 1/2/07, Nicolas Williams <[EMAIL PROTECTED]> wrote:
> On Sat, Dec 30, 2006 at 03:34:01PM +, Emerson Clarke wrote:
> > Technically sqlite is not thread safe.  [...]
>
> Solaris man pages describe APIs with requirements like SQLite's as 
> "MT-Safe with exceptions" and the exceptions are listed in the man page.
>
> That's still MT-Safe, but the caller has to play by certain rules.
>
> Anyways, this is silly.  SQLite API is MT-Safe with one exception and 
> that exception is rather ordinary, common to other APIs like it that 
> have a context object of some sort (e.g., the MIT krb5 API), and not 
> really a burden to the caller.  In exchange for this exception you get 
> an implementation of the API that is lighter weight and easier to 
> maintain than it would have been without that exception; a good 
> trade-off IMO.
>
> Coping with this exception is easy.  For example, if you have a server 
> app with multiple worker threads each of which needs a db context then 
> you could use a thread-specific key to track a per-thread db context; 
> use pthread_key_create(3C) to create the key, pthread_setspecific(3C) 
> once per-thread to associate a new db context with the calling thread, 
> and pthread_getspecific(3C) to get the calling thread's db context 
> when you need it.  If you have a protocol where you have to step a 
> statement over multiple message exchanges with a client, and you don't 
> want to have per-client threads then get a db context 
> per-client/exchange and store that and a mutex in an object that 
> represents that client/exchange.  And so on.
>
> Nico
> --
>
> --
> --- To unsubscribe, send email to 
> [EMAIL PROTECTED]
> --
> ---
>
>


-
To unsubscribe, send email to [EMAIL PROTECTED]

-






AW: [sqlite] sqlite performance, locking & threading

Richard,

I believe his problem is this:

"Each query is allowed to complete before the other one starts, but each
thread may have multiple statements or result sets open."

The open resultsets/multiple started statements are causing him headaches.

Mike

-Ursprüngliche Nachricht-
Von: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Gesendet: Samstag, 30. Dezember 2006 17:32
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite performance, locking & threading

"Emerson Clarke" <[EMAIL PROTECTED]> wrote:
> Richard,
> 
> Are you sure we are not just getting into semantic knots here ?
> 
> Do we have the same definition of "at the same time".  I mean 
> concurrently, so that both threads use the same sqlite3 * structure, 
> within mutexes. Each query is allowed to complete before the other one 
> starts, but each thread may have multiple statements or result sets 
> open.
> 
> When i try to do this, i get api called out of sequence errors...
> 

There are around 50 test cases for this kind of behavior in the regression
test files thread1.test and thread2.test.  They all seem to work for me.

Perhaps your mutexes are not working as you expect and you are in fact
trying to use the same database connection simultaneously in two or more
threads.  SQLite attempts to detect this situation, and when it sees it, it
returns SQLITE_MISUSE, which generates the "API called out of sequence" error.


--
D. Richard Hipp  <[EMAIL PROTECTED]>






AW: [sqlite] sqlite performance, locking & threading

I want to contribute my €0.02 to this discussion. Basically I believe your 
(Emerson's) design is flawed. I've been working for years with multithreaded 
and even multi-core systems. In my experience a design using threads for 
specific tasks *always* performs better than having multiple threads execute 
the same things in parallel. This experience is based on the realities of lock 
contention, thread starvation, caching and more. These always apply unless you 
have a realtime operating system, which guarantees that every thread is 
scheduled and in what order. On most systems that isn't the case AFAIK, at 
least not on Windows/Linux when there's heavy threading and lock contention 
going on.

I would really suggest you test out a design where threads are assigned 
single tasks, such as retrieving data from the db or writing data to the db, 
and queues which provide these threads with work items. If you're really 
smart, you'll create the queues using atomic operations so that no thread 
takes locks for these operations.

To summarize my points:

- Create one Sqlite writer thread, one Sqlite reader thread.
- Create queues for all worker threads, which provide them with work items.
- Do *not* take kernel locks on data structures or libraries; this *will* 
really hurt your performance.
- Use transactions coarse-grained on the Sqlite writer thread. Either decide 
inside the thread when to commit and start a new one, or design a specific 
work item to trigger this from the outside. This depends on having a 
consistent state in your data structures/the database.

But: your limiting factor will always be the hard disk. Analyze your tasks to 
determine what the bounding factor is: is it the CPU? The disk? The network? 
Only then start changing something. Multithreading only makes sense if you can 
parallelize heavily and are not bound by disk/network but by CPU, and have 
multiple (unused) cores available.

And: having multiple statements executing concurrently is only possible with 
multiple Sqlite connections. A connection can only keep one result set open 
or one statement executing (unless that has changed recently.)

Mike

-Ursprüngliche Nachricht-
Von: Emerson Clarke [mailto:[EMAIL PROTECTED] 
Gesendet: Samstag, 30. Dezember 2006 17:08
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite performance, locking & threading

Richard,

Are you sure we are not just getting into semantic knots here ?

Do we have the same definition of "at the same time"?  I mean concurrently, so 
that both threads use the same sqlite3 * structure, within mutexes. Each query 
is allowed to complete before the other one starts, but each thread may have 
multiple statements or result sets open.

When I try to do this, I get API called out of sequence errors...

On 12/30/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> "Emerson Clarke" <[EMAIL PROTECTED]> wrote:
> > But why then can i not have a single transaction wrapping a single 
> > connection which is used within multiple threads, obviously not at 
> > the same time.
>
> You can.  What makes you think you can't?
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>
>
>
>
>




Re: AW: [sqlite] for what reason :memory: is much slower than /dev/sh m/dummy.db





Hello Eduardo,

thank you for the hints given. Please can you tell me how to disable
journaling? In our project it is not important to have the database
persistent. We create the database, work with it, and destroy it within one
batch run. So we dont have to save it to disk ;-).

thank you so far
roland


You must tweak the sqlite code; it may be a very bad tweak that 
damages your data or simply doesn't work. It can be done in 2 ways:


a) Go to the file main.c, search for "sqlite3BtreeFactory" and modify the 
second occurrence. The original line says

  rc = sqlite3BtreeFactory(db, zFilename, 0, MAX_PAGES, &db->aDb[0].pBt);

and change it to

  rc = sqlite3BtreeFactory(db, zFilename, BTREE_OMIT_JOURNAL, MAX_PAGES, &db->aDb[0].pBt);


b) In main.c, change the sqlite3BtreeFactory function lines

  if( omitJournal ){
    btree_flags |= BTREE_OMIT_JOURNAL;
  }

to (just comment out the condition)

  //if( omitJournal ){
    btree_flags |= BTREE_OMIT_JOURNAL;
  //}

This way you always open databases and virtual databases without 
journaling. You can't use temporary files, databases or tables; you 
must use memory for temporary work.


Again, if you don't know what you are doing or what this does, don't do it.

It would be better if a pragma, compiler option, or open-file option were 
added to the OpenDatabase function to manage this.


HTH

P.S. Perhaps we made other changes or have compile options that allow 
this tweak to work.

P.P.S. If you can, share any other tricks/tweaks for memory databases you find.
P.P.P.S. And last, these changes are for embedded hardware and OSes 
and shouldn't be used on regular computers.







AW: [sqlite] for what reason :memory: is much slower than /dev/sh m/dummy.db



> -Ursprüngliche Nachricht-
> Von: Eduardo Morras [mailto:[EMAIL PROTECTED]
> Gesendet: Freitag, 1. Dezember 2006 19:44
> An: sqlite-users@sqlite.org
> Betreff: Re: [sqlite] for what reason :memory: is much slower than
> /dev/shm/dummy.db
> 
> 
> At 09:34 01/12/2006, you wrote:
> >Hi there,
> >
> >we are on a challenging project with very high requirements on 
> >performance. When doing some debugging we discovered that the sqlite 
> >method for creating a memory-based database is much slower than using 
> >e.g. /dev/shm on linux or /tempfs on solaris. (We have measured a 
> >20min performance advantage for the /dev/shm style on a batch run 
> >which takes 70min with :memory: and just 49min using /dev/shm.)
> >Because our project needs to be ported to windows, /dev/shm is not an 
> >option - win2000 does not support any temporary memory-based file 
> >system. But beside that, we guess that there will be a possibility to 
> >tune :memory:, or we believe that we do something wrong when using 
> >:memory: (for example pragma page_size ...).
> >Is there anybody who can give us some advice to tune up our :memory: 
> >database to become as fast as the /dev/shm alternative?
> >
> >Thanks
> >roland
> 
> On our project we decided against :memory: databases; we use only a 
> ram disk file system. From time to time we make a snapshot to hard 
> disk or another persistent medium.
> 
> In windows I suppose you can make a ram disk using malloc and copy 
> your database file there, set the pragma for temporary files to 
> memory, and disable journaling. Write new I/O access routines based 
> on windows (open/close, write/read, etc.) to access your malloc'd 
> ram disk. Again, from time to time stop reads/writes to the database 
> and save it to disk.
> 
> HTH
> 
Hello Eduardo,

thank you for the hints given. Please can you tell me how to disable
journaling? In our project it is not important to have the database
persistent. We create the database, work with it, and destroy it within one
batch run. So we don't have to save it to disk ;-).

thank you so far
roland

> 
> --
> --
> ---
> Scientists have shown that the moon is moving away at a tiny yet 
> measurable distance from the earth every year.
>   If you do the math, you can calculate that 85 million years ago the 
> moon was orbiting the earth at a distance of
>   about 35 feet from the earth's surface. This would explain the 
> death of the dinosaurs. The tallest ones, anyway. 
> 
> 
> 




AW: [sqlite] for what reason :memory: is much slower than /dev/s hm/dummy.db



> -Ursprüngliche Nachricht-
> Von: John Stanton [mailto:[EMAIL PROTECTED]
> Gesendet: Freitag, 1. Dezember 2006 19:59
> An: sqlite-users@sqlite.org
> Betreff: Re: [sqlite] for what reason :memory: is much slower than
> /dev/shm/dummy.db
> 
> 
> Eduardo Morras wrote:
> > At 09:34 01/12/2006, you wrote:
> > 
> >> Hi there,
> >>
> >> we are on a challenging project with very high requirements on 
> >> performance. When doing some debugging we discovered that the 
> >> sqlite method for creating a memory-based database is much slower 
> >> than using e.g. /dev/shm on linux or /tempfs on solaris. (We have 
> >> measured a 20min performance advantage for the /dev/shm style on a 
> >> batch run which takes 70min with :memory: and just 49min using 
> >> /dev/shm.)
> >> Because our project needs to be ported to windows, /dev/shm is not 
> >> an option - win2000 does not support any temporary memory-based 
> >> file system. But beside that, we guess that there will be a 
> >> possibility to tune :memory:, or we believe that we do something 
> >> wrong when using :memory: (for example pragma page_size ...).
> >> Is there anybody who can give us some advice to tune up our 
> >> :memory: database to become as fast as the /dev/shm alternative?
> >>
> >> Thanks
> >> roland
> > 
> > 
> > On our project we decided against :memory: databases; we use only 
> > a ram disk file system. From time to time we make a snapshot to 
> > hard disk or another persistent medium.
> > 
> > In windows I suppose you can make a ram disk using malloc and copy 
> > your database file there, set the pragma for temporary files to 
> > memory, and disable journaling. Write new I/O access routines based 
> > on windows (open/close, write/read, etc.) to access your malloc'd 
> > ram disk. Again, from time to time stop reads/writes to the 
> > database and save it to disk.
> > 
> > HTH
> > 
> > 
> You might find you can get the same performance in a simpler 
> way by just 
> disabling synchronous writes.  Read Dr Hipp's explanation of a memory 
> database for the reason.
Hi,

Synchronous writes are already disabled. The performance difference occurs
even without synchronous writes.

thanks
roland
> 
> 




AW: [sqlite] for what reason :memory: is much slower than /dev/sh m/dummy.db

> -Ursprüngliche Nachricht-
> Von: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Gesendet: Freitag, 1. Dezember 2006 18:06
> An: sqlite-users@sqlite.org
> Betreff: Re: [sqlite] for what reason :memory: is much slower than
> /dev/shm/dummy.db
> 
> 
> John Stanton <[EMAIL PROTECTED]> wrote:
> > 
> > I understand that Sqlite uses a less efficient algorithm 
> for :memory 
> > files than for the cacheing on regular files and that means lower 
> > performance on those memory DBs.
> > 
> 
> It uses exactly the same algorithm.  It is just that the hash
> tables are fixed sized and are designed for a cache, not for
> an entire database.

Does this mean that if I use :memory: databases, the cache layer of sqlite
is disabled?

thanks in advance
roland

> 
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>
> 
> 
> 




AW: [sqlite] SQLite and McAfee Anti-Virus

I would actually remove the default or use the process name instead.

Just my $0.02.
Mike 

-Ursprüngliche Nachricht-
Von: John Stanton [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 1. November 2006 01:28
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] SQLite and McAfee Anti-Virus

Why not make it McAfee and direct the complaints to the culprits?

Joe Wilson wrote:
> Even better - name the temp file prefix msaccess_ and no one will dare 
> touch it or question it.
> 
> - Original Message 
> From: James Berry <[EMAIL PROTECTED]>
> To: sqlite-users@sqlite.org
> Sent: Tuesday, October 31, 2006 6:14:11 PM
> Subject: Re: [sqlite] SQLite and McAfee Anti-Virus
> 
> On Oct 31, 2006, at 2:24 PM, [EMAIL PROTECTED] wrote:
> 
>>  only people in
>>the know will figure out 'sqlite' is 'etilqs' backwards and, as you 
>>point out,
> 
> 
> You're assuming that Google won't find this thread, which is now 
> public record. Soon a search for (that new word) will get back to 
> sqlite anyway... ;)
> 
> -jdb
> 
> 
> 
> 
> 
> 
> 






AW: [sqlite] PK and rowid

uniqId is the same as rowid, 
so you will get the same execution plans for ...where rowid=x and  ... where 
uniqId=x.
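
The aliasing is easy to confirm. A minimal sketch using Python's sqlite3 module (the table and values are made up for the example):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# A column declared exactly "INTEGER PRIMARY KEY" becomes an alias
# for SQLite's built-in rowid.
db.execute("CREATE TABLE tbl1 (uniqId INTEGER PRIMARY KEY, val TEXT)")
db.execute("INSERT INTO tbl1 VALUES (42, 'hello')")

# Selecting both names returns the same value: one B-tree lookup either way.
row = db.execute("SELECT rowid, uniqId FROM tbl1 WHERE uniqId = 42").fetchone()
assert row == (42, 42)
```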


- Original Message -
From: chetana bhargav <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Wednesday, October 11, 2006, 09:19:28
Subject: [sqlite] PK and rowid


Hi,
   
  If I declare my column as,  "uniqId integer primary key", now if I say 
something like,
  select * from tbl1 where uniqId=x;
   
  Will the uniqId be the same as rowid, making my table lookups faster as I am 
using the row id only? If not, what's the way to assign my uniqId to the row id so 
that my lookups would be faster?
   
   
  ...
  Chetana.









AW: [sqlite] Problem with close

Hi,

we have used sqlite3 with the Finisar.SQLite.NET wrapper in our applications 
too, and there were the same unreproducible problems.
After switching to the Mono.Data.SqliteClient.dll wrapper, all problems 
disappeared.
Try the Mono wrapper; Finisar is buggy. :-(

Sylko


-Original Message-
From: Onnig Kouyoumdjian [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 28, 2006 23:47
To: sqlite-users@sqlite.org
Subject: [sqlite] Problem with close

I'm using the sqlite3 dll with the Finisar.SQLite.NET wrapper in our
application written in C#. Currently I'm using version 3.3.7, but I have
been having the problem going back to version 3.3.4 which is when we
started using SQLite.  Most of the time everything works fine, but every
once in a blue moon (or so) when a request to close a connection is
made, SQLITE_ERROR is returned.  I've been able to determine that
SQLITE_ERROR is returned because magic happened to be SQLITE_BUSY when I
called close.  

One of my biggest problems is that I have not yet been able to reproduce
this problem on demand.  I've written a test program that simulates the
actual application but so far I can't make it fail on my development
environment.  

I have not come to any conclusions whether the problem is from my code,
the SQLite.NET wrapper or SQLite.  

One thing that doesn't make sense to me is why SQLite was BUSY when I
called close. I don't get an exception or busy before I call close,
since I would log such a thing and I don't see anything in the log until
the SQLITE_ERROR.

Once the error occurs, I can ignore it and continue to use the database
by opening a new connection and everything works fine.  My problem is
that once the error happens during a close, I'm guessing that a VM is
now hanging around somewhere and is not released.  This causes a problem
for me where at the end of a job that my application runs, I delete the
database because it is no longer needed, but I can't in this case
because the file is locked in some way.  

My application is a Windows service, so the only way that I have found
so far to clear it up is the restart the service.  

My application does use two threads for database calls, but the database
is called through a singleton class and I make sure that there is a lock
for all of the database calls including the open and close. So there
should never be any time where two threads would make a call to a
specific database.  I have two databases and it is possible that the two
threads call the different databases at the same time.

I am compiling the sqlite dll myself with VS.NET 2003 making sure it is
compiled with THREADSAFE=1.

I do have a theory that this problem shows up when the hard drive is
under a heavy load, but so far haven't been able to prove it. 

I will take suggestion on a fix or a Band-Aid.  The Band-Aid being: once
the job is done if I can find a way for the sqlite dll to release the
files, then I can delete them and all will be well.  

Thanks,

Onnig



-
To unsubscribe, send email to [EMAIL PROTECTED]
-







Re: AW: [sqlite] Memory mapped db


Trevor Talbot wrote:

Michael is referring to a direct map from disk pages to memory pages
(OS notion of pages, not sqlite's), using something like mmap() on
unix or MapViewOfFile() on Windows.  This way memory is directly
backed by the file it refers to, instead of copying the data to
entirely new pages (possibly backed by swap).  It removes one level of
(redundant) cache.

The complications tend to come in when considering I/O control and
locking.  OS pages don't necessarily map to sqlite pages, so there can
be some "odd" boundaries there.  This would be most noticeable when
trying to flush data to disk.  (The typical mmap abstraction requires
you force dirty OS pages to disk.  Interactions between file maps
(often copy-on-write) and underlying OS caches can be weird.).

You're also bounded by VM space when trying to map large files.  Most
mapping abstractions use "windows" intended to map several sequential
OS pages, and since sqlite randomly accesses pages, it would probably
be too much overhead when trying to handle files larger than the VM
space you're willing to sacrifice to the map.

In the general case I don't see it paying off, but in some specific
cases it could be a win.  I'd be interested to see experiments.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



A well reasoned explanation.  We use mmap'ing with great success, but 
not for very large files or databases, for the reasons described above.


You definitely get a performance lift with mmap'd I/O largely because a 
level of buffer shadowing is removed.


On a B-Tree index access I measured a speed improvement of about 40% by 
changing from reads and local cacheing to mmap'd access.
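
The shadow-copy removal described above can be illustrated in miniature with Python's mmap module (the 4 KiB page size and file layout are assumptions for the example, not anything SQLite does internally):

```python
import mmap
import os
import tempfile

# Build a scratch file of 64 4-KiB "pages" (sizes are illustrative only).
PAGE = 4096
path = os.path.join(tempfile.mkdtemp(), "pages.bin")
with open(path, "wb") as f:
    f.write(os.urandom(64 * PAGE))

# Conventional I/O: seek + read copies the page into a private buffer,
# shadowing the copy the OS already holds in its page cache.
with open(path, "rb") as f:
    f.seek(10 * PAGE)
    via_read = f.read(PAGE)

# Memory-mapped I/O: the buffer is backed directly by the OS page cache,
# so that extra copy disappears.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    via_mmap = bytes(mm[10 * PAGE:11 * PAGE])
    mm.close()

assert via_read == via_mmap
```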


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: AW: [sqlite] Memory mapped db


Michael is referring to a direct map from disk pages to memory pages
(OS notion of pages, not sqlite's), using something like mmap() on
unix or MapViewOfFile() on Windows.  This way memory is directly
backed by the file it refers to, instead of copying the data to
entirely new pages (possibly backed by swap).  It removes one level of
(redundant) cache.

The complications tend to come in when considering I/O control and
locking.  OS pages don't necessarily map to sqlite pages, so there can
be some "odd" boundaries there.  This would be most noticeable when
trying to flush data to disk.  (The typical mmap abstraction requires
you force dirty OS pages to disk.  Interactions between file maps
(often copy-on-write) and underlying OS caches can be weird.).

You're also bounded by VM space when trying to map large files.  Most
mapping abstractions use "windows" intended to map several sequential
OS pages, and since sqlite randomly accesses pages, it would probably
be too much overhead when trying to handle files larger than the VM
space you're willing to sacrifice to the map.

In the general case I don't see it paying off, but in some specific
cases it could be a win.  I'd be interested to see experiments.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: AW: [sqlite] Memory mapped db

On Thu, 28 Sep 2006 15:45:54 +0200, you wrote:

Hi Michael

>-Original Message-
>From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, September 28, 2006 15:37
>To: sqlite-users@sqlite.org
>Subject: Re: [sqlite] Memory mapped db

>That's not really the same. I would have to copy the db into ram after
>opening it, and since the db is too big to fit into the memory I would have
>to recreate it depending on my selects. 
>If sqlite maps the db into memory, the operating system manages the mapping;
>sqlite "just" has to move its view over the file.

I'm working with some "small" SQLite databases with fewer than 150,000
records (extractions of a bigger CS-DB).  The records are stored in
several tables. One table with 4000-5000 records is bigger; the rest,
up to that limit, are smaller. I'm using SQLite to do some quick jobs with
the data on a local machine. I read and load the whole result set, opened
by a SQL query, as a virtual table into a vector list in RAM. Once this is
done, I can read record by record, forward, backward, skip in any
direction, alter, append new records, delete records. I can do
whatever I want.

 It seems to me that I am doing what you ask. But... I don't know which
development system you are using. I am working with VC++ .NET 2003.
If that is wrong for you, excuse the disturbance   ;-)
If it's OK, spend some time at my homepage.

Best Regards
Thomas

www.thlu.de

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Memory mapped db



-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 28, 2006 15:37
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Memory mapped db


>use the database named :memory:
>for a ram database. In a lot of cases it will be cached by
>the operating system so it ends up being that way anyway!

That's not really the same. I would have to copy the db into ram after
opening it, and since the db is too big to fit into the memory I would have
to recreate it depending on my selects. 
If sqlite maps the db into memory, the operating system manages the mapping;
sqlite "just" has to move its view over the file.

 Michael
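
For reference, a :memory: database always starts empty, so "copy the db into ram" has to be an explicit step. A rough sketch with Python's sqlite3 module (the backup API, available since Python 3.7, does a page-by-page copy; the table is made up):

```python
import os
import sqlite3
import tempfile

# Create a small file-backed database standing in for the "real" db.
path = os.path.join(tempfile.mkdtemp(), "big.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE t (v)")
disk.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
disk.commit()

# ":memory:" starts empty; copy the on-disk contents in page by page.
mem = sqlite3.connect(":memory:")
disk.backup(mem)

assert mem.execute("SELECT count(*) FROM t").fetchone()[0] == 5
```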




AW: AW: [sqlite] Performance question



-Original Message-
From: Martin Pfeifle [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 13:35
To: sqlite-users@sqlite.org
Subject: AW: AW: [sqlite] Performance question


>Hi Michael,
>could you please (re)post the exact create index statements + primary key you
>used. For speeding up 
>your query, you need an index on x only but not on id,x. Best Martin

The table looks like:
(blobsize between 100 and 8000 bytes, 25 rows in the table)

Create table t1 (x integer, y integer, flag integer, data blob)
Create index idx on t1 (x,y,flag)
(it doesn't matter if it is included in the index)

Takes 5ms on my pda, 100 of those need 500ms:
Select data from t1 where x=v1 and y=v1 and flag=f  
Takes 7sec(!) on pda for a rectangle with 60 blobs:
Select data from t1 where (x between xlow and xhigh) and (y between ylow and
yhigh) and flag=f
Lightning fast:
Adding a column xy set to (x << 16|y) and replacing the index with an idx on xy:
Select x,y,data from t1 where xy in (xy1,xy2,...)

Cheers,
 Michael
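
The packed-key trick above can be sketched as follows with Python's sqlite3 module (table shape and sizes are illustrative, not the original schema):

```python
import sqlite3

def key(x, y):
    # Pack both coordinates into one integer, as in the xy = (x << 16 | y) column.
    return (x << 16) | y

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t1 (x INTEGER, y INTEGER, xy INTEGER, data BLOB)")
db.execute("CREATE INDEX idx ON t1(xy)")
db.executemany(
    "INSERT INTO t1 VALUES (?, ?, ?, ?)",
    [(x, y, key(x, y), b"") for x in range(100) for y in range(100)],
)

# Rectangle query as an IN list of packed keys: one index probe per point
# instead of a range scan over two separately indexed columns.
keys = [key(x, y) for x in range(10, 14) for y in range(20, 24)]
marks = ",".join("?" * len(keys))
rows = db.execute(f"SELECT x, y, data FROM t1 WHERE xy IN ({marks})", keys).fetchall()
assert len(rows) == len(keys)
```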






AW: [sqlite] auxiliary threads in sqlite3

I can't answer the question regarding SQLite for you, but Windows does start
auxiliary threads in some APIs. 

-Original Message-
From: Dixon [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 27, 2006 16:36
To: sqlite-users@sqlite.org
Subject: [sqlite] auxiliary threads in sqlite3

I'm running with sqlite 3.3.6 in a windows environment.  I have NOT defined
"THREADSAFE=1".  My app is single threaded, however, multiple instances of
the app use sqlite3 on the same DB's, simultaneously.  My question -- Does
SQLite ever start auxiliary threads?


-
To unsubscribe, send email to [EMAIL PROTECTED]

-






AW: AW: [sqlite] Performance question

Hi Michael,
could you please (re)post the exact create index statements + primary key you 
used.
For speeding up your query, you need an index on x only but not on id,x.
Best Martin

- Original Message -
From: Michael Wohlwend <[EMAIL PROTECTED]>
To: "sqlite-users@sqlite.org" <sqlite-users@sqlite.org>
Sent: Tuesday, September 26, 2006, 09:34:00
Subject: AW: [sqlite] Performance question


-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 22, 2006 17:07
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Performance question


Michael Wohlwend wrote:

> But If I do "select data from pictures where (x between high_x and 
> low_x) and (y between high_y and low_y) then this takes ca. 8 seconds 
> (!) on wince.
>
>   

>>If you are really writing your between clauses as above with the high 
>>limit first, then they are not doing what you think. The low limit 
>>should always be given first.

Ah, that was a typo, of course the query was "between (low and high)". I
changed this to
"x > low and x <= high ..." and I got the same result: a single query
(without bounds check) takes 5ms, while the query with the bounds check takes
ca. 7 seconds (there are indices on x and y).
I changed the query to (select  ... where id in (v1,v2,...)); this was quite
fast again, even if the list of values got over 200 elements, but that's not
the way I wanted to do it. Maybe SQLite on ARM CPUs in the current
implementation isn't optimized enough. But I have no idea where this huge
slowdown comes from.

Cheers
Michael







AW: [sqlite] Performance question



-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 22, 2006 17:07
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Performance question


Michael Wohlwend wrote:

> But If I do "select data from pictures where (x between high_x and 
> low_x) and (y between high_y and low_y) then this takes ca. 8 seconds 
> (!) on wince.
>
>   

>>If you are really writing your between clauses as above with the high 
>>limit first, then they are not doing what you think. The low limit 
>>should always be given first.

Ah, that was a typo, of course the query was "between (low and high)". I
changed this to
"x > low and x <= high ..." and I got the same result: a single query
(without bounds check) takes 5ms, while the query with the bounds check takes
ca. 7 seconds (there are indices on x and y).
I changed the query to (select  ... where id in (v1,v2,...)); this was quite
fast again, even if the list of values got over 200 elements, but that's not
the way I wanted to do it. Maybe SQLite on ARM CPUs in the current
implementation isn't optimized enough. But I have no idea where this huge
slowdown comes from.

Cheers
 Michael


AW: AW: [sqlite] Re: Queries fail - I can't figure out why

Nevermind the issue. I've found my problem: I bound my string using character 
length instead of byte length for UTF-16.

Problem solved.

Mike
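
For reference, the character-count vs. byte-count trap looks like this (illustrative sketch; sqlite3_bind_text16 takes its length argument in bytes, and the GUID is the one from the thread):

```python
guid = "{08021C17-46DD-4d83-A6FE-DDF0F7EC0AAE}"
utf16 = guid.encode("utf-16-le")

# sqlite3_bind_text16 expects the length in BYTES: two per character here.
assert len(utf16) == 2 * len(guid)

# Passing the character count instead binds only the first half of the
# string, so the indexed lookup silently matches nothing.
truncated = utf16[:len(guid)].decode("utf-16-le")
assert truncated == guid[:len(guid) // 2]
```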

>Hi,
>
>Yes I showed an example query. The query I used for sqlite3_prepare
>is the following:
>
>SELECT * FROM Objects WHERE ObjectID = ?
>
>If I'd put quotes around the question mark, binding would have failed.
>
>Interestingly I've even had some queries fail in the SQLite shell 
>yesterday. Others worked. Looking at the EXPLAIN output the primary
>key index is used as it should - but somehow fails to locate the row,
>even though the where conditions are right and should return one row.
>
>The SQLite shell is the Windows EXE you can download on sqlite.org. I 
>compiled sqlite3.lib myself using Visual Studio 2005.
>
>Where does SQLite compare the condition with the index? I'd try to
>set a breakpoint and look further. Looking at the EXPLAIN output I 
>haven't figured that out yet, but it was late yesterday.
>
>Additionally I'll try to make a simple example program to find this 
>bug.
>
>Mike
>
>-Original Message-
>From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
>Sent: Sunday, September 24, 2006 22:45
>To: SQLite
>Subject: [sqlite] Re: Queries fail - I can't figure out why
>
>Michael Ruck 
>wrote:
>> I have the tables of the following style:
>>
>> CREATE TABLE Objects (ObjectID TEXT PRIMARY KEY, Class TEXT)
>>
>> And I'm executing the following statement in the sqlite3 shell:
>>
>> SELECT * FROM Objects WHERE ObjectID = 
>> '{08021C17-46DD-4d83-A6FE-DDF0F7EC0AAE}'
>>
>> In the shell this query succeeds. However if I try to do the same 
>> thing via sqlite3_prepare, sqlite3_bind_text16 and sqlite3_step, then 
>> sqlite3_step always returns 101 (SQLITE_DONE.)
>
>The query you show does not have any parameters - what are you using
>sqlite3_bind_text16 for? You are probably using a different query in your
>program - show it.
>
>As a wild guess, does your query look anything like this:
>
>SELECT * FROM Objects WHERE ObjectID = '?'
>
>(with question mark in quotes)? Note that '?' is a string literal consisting
>of one question mark character, not a parameter placeholder. 
>The correct parameterized query is
>
>SELECT * FROM Objects WHERE ObjectID = ?
>
>Igor Tandetnik 
>
>
>
>-
>To unsubscribe, send email to [EMAIL PROTECTED]
>
>-
>
>
>
>

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



AW: [sqlite] Re: Queries fail - I can't figure out why

Hi,

Yes I showed an example query. The query I used for sqlite3_prepare
is the following:

SELECT * FROM Objects WHERE ObjectID = ?

If I'd put quotes around the question mark, binding would have failed.

Interestingly I've even had some queries fail in the SQLite shell 
yesterday. Others worked. Looking at the EXPLAIN output the primary
key index is used as it should - but somehow fails to locate the row,
even though the where conditions are right and should return one row.

The SQLite shell is the Windows EXE you can download on sqlite.org. I 
compiled sqlite3.lib myself using Visual Studio 2005.

Where does SQLite compare the condition with the index? I'd try to
set a breakpoint and look further. Looking at the EXPLAIN output I 
haven't figured that out yet, but it was late yesterday.

Additionally I'll try to make a simple example program to find this 
bug.

Mike

-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: Sunday, September 24, 2006 22:45
To: SQLite
Subject: [sqlite] Re: Queries fail - I can't figure out why

Michael Ruck 
wrote:
> I have the tables of the following style:
>
> CREATE TABLE Objects (ObjectID TEXT PRIMARY KEY, Class TEXT)
>
> And I'm executing the following statement in the sqlite3 shell:
>
> SELECT * FROM Objects WHERE ObjectID = 
> '{08021C17-46DD-4d83-A6FE-DDF0F7EC0AAE}'
>
> In the shell this query succeeds. However if I try to do the same 
> thing via sqlite3_prepare, sqlite3_bind_text16 and sqlite3_step, then 
> sqlite3_step always returns 101 (SQLITE_DONE.)

The query you show does not have any parameters - what are you using
sqlite3_bind_text16 for? You are probably using a different query in your
program - show it.

As a wild guess, does your query look anything like this:

SELECT * FROM Objects WHERE ObjectID = '?'

(with question mark in quotes)? Note that '?' is a string literal consisting
of one question mark character, not a parameter placeholder. 
The correct parameterized query is

SELECT * FROM Objects WHERE ObjectID = ?

Igor Tandetnik 



-
To unsubscribe, send email to [EMAIL PROTECTED]

-






AW: [sqlite] Performance question



-Original Message-
From: Gerald Dachs [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 22, 2006 11:28
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Performance question



>My SQL knowledge may be a little bit rusty and I have really no idea how
>sqlite is doing "between" queries. Anyway, I once learned never to use
>between because your query is equivalent to:

>where ((x >= high_x and x <= low_x) or (x >= low_x and x <= high_x) and
>(y >= high_y and y <= low_y) or (y >= low_y and y <= high_y))

I think it is:
where (x >= low_x and x <= high_x) and (y >= low_y and y <= high_y), i.e. in
"between a and b", a should be lower or equal to b (you don't get a result
otherwise)

So it's without the "or" part. But I will test the other statement too.

>because of the or operators you will get a union of 4 selects. Maybe I am
>wrong but I would expect that "where x >= low_x and x <= high_x and y >=
>low_y and y <= high_y" should be faster and all that you need. You have
>indices on x and y, haven't you?

Yep :-)


Cheers 
 Michael
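
The BETWEEN operand-order point is easy to verify (sketch with Python's sqlite3 module; table and values are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE p (x INTEGER)")
db.executemany("INSERT INTO p VALUES (?)", [(i,) for i in range(10)])

# "BETWEEN a AND b" means a <= x <= b; with a > b it matches nothing.
n_ok = db.execute("SELECT count(*) FROM p WHERE x BETWEEN 3 AND 6").fetchone()[0]
n_swapped = db.execute("SELECT count(*) FROM p WHERE x BETWEEN 6 AND 3").fetchone()[0]
assert (n_ok, n_swapped) == (4, 0)
```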


AW: [sqlite] sqlite shared-cache mode usage

Hi everybody,

I am in the same situation, wondering if two or more processes can access
the database if one of them is in shared-cache mode, e.g. one process act as
in test_server.c serving multiple clients, another process reads/writes the
database through the ODBC driver of C. Werner.

Thanks in advance.

Marc Ruff
 
-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Sunday, August 27, 2006 19:07
To: sqlite-users@sqlite.org
Subject: [sqlite] sqlite shared-cache mode usage

Good morning all,

I'm in the design stage of a project and had a question about sqlite
shared-cache mode.
The new version of firefox will use mozStorage, which is based on
sqlite using shared-cache mode. I want other programs to be able
read/write to the database but I was told this might
be a problem. Are there any issues with two completely separate
processes accessing
the database when in this mode? This will be important if other apps
want to read or
manipulate the downloaded files list or the bookmarks.

Thanks!


-
To unsubscribe, send email to [EMAIL PROTECTED]

-






AW: [sqlite] journal-off assert

hi,
I got the same error when I ported SQLite to an operating system using a 
proprietary file system.
The reason was that our file system did not support sparse files, i.e.
the fstat command returned the wrong file size. 
 
Maybe you should, independently of SQLite, try to 
* create a file,
* write data to it
* do an fseek over the end of the file
* write data to the file
and then determine the size with fstat.
In my case that was not working.
It was a bug of the underlying file system and not of SQLite...
best Martin
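
The check Martin describes can be sketched as a standalone test (Python; POSIX-style seek-past-end semantics are assumed):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.write(b"head")            # create the file and write data to it
    f.seek(4096, os.SEEK_CUR)   # seek past the current end of the file
    f.write(b"tail")            # write again, leaving a 4096-byte hole

# stat/fstat must report the full logical size, hole included; a file
# system that mishandles sparse files returns something smaller.
assert os.stat(path).st_size == 4 + 4096 + 4
```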


 
- Original Message -
From: weiyang wang <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Thursday, August 24, 2006, 09:52:37
Subject: [sqlite] journal-off assert


hi,

I am trying to integrate SQLite into a mobile platform, and a porting layer,
like 'win_os.c', has been created for that purpose.

The integrated SQLite works well on our mobile platform with the default
settings, but errors come out when I try to tune the performance with 'pragma
cache_size = 1'.

what i have done is:

 /* create a new database */
sqlite3_open("contacts.db", &contactsdb);

 /* tune the cache_size */
 sqlite3_exec(contactsdb, "pragma default_cache_size = 1", callback, 0,
&errmsg);

 /* create a table */
sqlite3_exec(contactsdb, "create table IF NOT EXISTS contacts (name
text, phone int)", callback, 0, &errmsg);

/* insert values into the table, 250 entries in total */
   for (i = 0; i < 250; i++)
  {
   sqlite3_exec(contactsdb, "insert xxx,xxx", callback, 0, &errmsg);
  }


The symptom is:

After the size of the database increased to 3K, the program cannot insert
entries into the table anymore. The error returned is 'The database disk
image is malformed'.

I have checked the porting layer I made; the read, write, and offset
operations seem correct.

Then I recompiled the SQLite code with SQLITE_DEBUG and added
'sqlite3_exec(contactsdb, "PRAGMA vdbe_trace=ON", callback, 0, &errmsg);',
and an assertion stalled the execution: "assert( pPager->journalOff==jSz );" in
pager.c.

The stack values at the time of the break were:
pPager->journalOff = 1544
jSz = 1543
file size of contacts.db-journal is 1543 (checked from Windows)

I guess the problems are related to file offset, reading, and writing, or
related to bugs in SQLite; I am not quite sure.

So, is there anyone who can help me out?


best regards,

yang

AW: [sqlite] what tier architecture?

From your description I would consider this a one-tier architecture.
Usually the tiers are defined as follows:

- Presentation

GUI, all user interaction.

- Business Logic

Logic, which can not be expressed by constraints in the database.
Interaction with other (software) systems and some more logic.

- Data (access) layer

This one is hard IMHO. In most cases this resembles a mapper, which maps
from relational tables to objects used in the business logic. Additionally
you have a set of predefined functions, which perform queries upon the data
sources and return the results as objects. In most cases these are only
access layers, which use a database server and stored procedures.

HTH,
Mike

-Original Message-
From: John Newby [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 7, 2006 16:43
To: sqlite-users@sqlite.org
Subject: [sqlite] what tier architecture?

Hi, this is probably a stupid question and it has nothing to do with SQLite
per se, so I apologise in advance for hijacking the emails with my question;
plus, I don't even know if this is the right place to ask the question.

I have created a GUI to SQLite using VB.Net for my University project and it
interfaces with the SQLite .dll file.

I have to now write about the architecture of my product but I am unsure as
what it would be, after reading this website I am even more confused :-

http://www.javaworld.com/javaworld/jw-01-2000/jw-01-ssj-tiers.html

Could it be one tier as both the DBMS and GUI are as one

or could it be two tier as the GUI could be the client and the DBMS the
server

or could it be three tier as the GUI could be the client and the DBMS the
server and database?

I'm really confused, so any help or information on this subject would be
really appreciated.

Many thanks

John



Re: RE: RE: AW: AW: [sqlite] New JDBC driver for SQLite


Brannon wrote:

It was just a warning.


Instructions for MSVC added to the README.


RE: RE: AW: AW: [sqlite] New JDBC driver for SQLite


> Thanks for the info. Points 2, 4 and 5 are covered by the 
> Makefile (DB.h is generated with javah), but I'll patch the 
> project for the variable declarations and the cast. Though I 
> have a feeling the cast is unnecessary, did VC throw an 
> error or warning for that?

It was just a warning.



Re: RE: AW: AW: [sqlite] New JDBC driver for SQLite


Brannon King wrote:

To compile the binary with VC71, I had to

1. move a dozen variable declarations to the top of the function
2. download the DB.h file separately from the build tree
3. change the jstrlen to end with "return (int)(s - str) - suppChars"
4. change my sqlite3 lib build to #define SQLITE_ENABLE_COLUMN_METADATA
5. reference the sqlite3 lib in a dll project containing the DB.c/h


Thanks for the info. Points 2, 4 and 5 are covered by the Makefile
(DB.h is generated with javah), but I'll patch the project for the
variable declarations and the cast. Though I have a feeling the cast
is unnecessary, did VC throw an error or warning for that?

I'd be interested in knowing if you've noticed a significant speed
improvement. The simplest test would be running test.Test from the
source tree.

d


RE: AW: AW: [sqlite] New JDBC driver for SQLite

To compile the binary with VC71, I had to

1. move a dozen variable declarations to the top of the function
2. download the DB.h file separately from the build tree
3. change the jstrlen to end with "return (int)(s - str) - suppChars"
4. change my sqlite3 lib build to #define SQLITE_ENABLE_COLUMN_METADATA
5. reference the sqlite3 lib in a dll project containing the DB.c/h

> -Original Message-
> From: David Crawshaw [mailto:[EMAIL PROTECTED] 
> Sent: Monday, July 31, 2006 2:03 AM
> To: Martin Pfeifle; sqlite-users@sqlite.org
> Subject: Re: AW: AW: [sqlite] New JDBC driver for SQLite
> 
> Martin Pfeifle wrote:
> > could you please shortly outline the differences between your jdbc 
> > driver and the one developed by Christian Werner?
> 
> I haven't looked too closely at the other driver, but from 
> what I have seen, it is designed to work with the old 
> callback exec() functions, so it supports SQLite 2, but is 
> probably slower.
> 
> I've gone for simplicity with this one. This driver has a lot 
> less code in it. There is a single .c file:
> 
> http://java.zentus.com/src/sqlitejdbc/src/org/sqlite/DB.java
> http://java.zentus.com/src/sqlitejdbc/src/org/sqlite/DB.c
> 
> which basically maps the C functions into Java. The JDBC 
> implementation just calls these functions as appropriate.
> 
> I also put a lot of effort into making binaries, so it is easy to use.
> Compiling JNI libraries for different platforms is a bit of a pain.
> 
> d
> 



Re: AW: AW: [sqlite] New JDBC driver for SQLite


Martin Pfeifle wrote:

could you please shortly outline the differences between your jdbc driver
and the one developed by Christian Werner?


I haven't looked too closely at the other driver, but from what I have
seen, it is designed to work with the old callback exec() functions,
so it supports SQLite 2, but is probably slower.

I've gone for simplicity with this one. This driver has a lot less
code in it. There is a single .c file:

http://java.zentus.com/src/sqlitejdbc/src/org/sqlite/DB.java
http://java.zentus.com/src/sqlitejdbc/src/org/sqlite/DB.c

which basically maps the C functions into Java. The JDBC
implementation just calls these functions as appropriate.

I also put a lot of effort into making binaries, so it is easy to use.
Compiling JNI libraries for different platforms is a bit of a pain.

d


AW: AW: [sqlite] New JDBC driver for SQLite

Hi David,
could you please shortly outline the differences between your jdbc driver and 
the one developed by Christian Werner?
Best Martin


- Original Message -
From: David Crawshaw <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org; Martin Pfeifle <[EMAIL PROTECTED]>
Sent: Sunday, July 30, 2006, 23:37:17
Subject: Re: AW: [sqlite] New JDBC driver for SQLite


Martin Pfeifle <[EMAIL PROTECTED]> wrote:
> Does the jdbc driver also provide the direct reading and writing of BLOBs?

Yes, through PreparedStatement.setBytes() and ResultSet.getBytes(). I
haven't added support yet for the java.sql.Blob type, because I am
fundamentally opposed to the idea of SQL calls generating heap garbage
and invoking the GC.

One thing I am trying to solve is how to get the size of a BLOB
without using RS.getBLOB().

d
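
One option for the BLOB-size question, assuming asking SQL for it is acceptable: SQLite's built-in length() function reports a BLOB's size in bytes without the application having to fetch the BLOB itself. A sketch with Python's sqlite3 module (table and sizes are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE b (data BLOB)")
db.execute("INSERT INTO b VALUES (?)", (b"\x00" * 1234,))

# length() on a BLOB returns its size in bytes; the BLOB content itself
# never crosses into the application.
size = db.execute("SELECT length(data) FROM b").fetchone()[0]
assert size == 1234
```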

Re: AW: [sqlite] New JDBC driver for SQLite


Martin Pfeifle <[EMAIL PROTECTED]> wrote:

Does the jdbc driver also provide the direct reading and writing of BLOBs?


Yes, through PreparedStatement.setBytes() and ResultSet.getBytes(). I
haven't added support yet for the java.sql.Blob type, because I am
fundamentally opposed to the idea of SQL calls generating heap garbage
and invoking the GC.

One thing I am trying to solve is how to get the size of a BLOB
without using RS.getBLOB().

d


AW: [sqlite] New JDBC driver for SQLite

Does the jdbc driver also provide the direct reading and writing of BLOBs?  
Best Martin


- Original Message -
From: David Crawshaw <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Sunday, July 30, 2006, 14:56:18
Subject: [sqlite] New JDBC driver for SQLite


Hello all,

I have written a new JDBC driver for SQLite.

http://java.zentus.com/sqlitejdbc.html

It is designed to be easy to read and as thin a layer as possible on
top of the C functions. The new _prepare()/_step() API is surprisingly
similar to the JDBC API, making it efficient code. SQLite is compiled
straight into the JNI binary, which keeps things simple for Java
programmers. There is only one dll/so/jnilib that needs to be added to
a project.

I've build binaries for Linux i386, Mac (universal) and Windows. Any
testing would be greatly appreciated, I don't have much hardware.

My few speed tests so far suggest it is fast. Code like:

connection.setAutoCommit(false);
for (int i = 0; i < 1000; i++)
    statement.executeUpdate("INSERT INTO...");
connection.commit();

runs in under 30ms on Linux. This means the JNI bridge is not causing
any serious overhead.

There are still a couple of features to add. I plan automatic Java
date/time handling and the code is not thread safe. These are not
difficult to do, I am just distracted by another optimisation right
now.

The code is under the BSD license. I hope someone finds a use for it.

d

AW: [sqlite] Major projects using SQLite

You can add Orb (www.orb.com) to the list.

Mike

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 14, 2006 17:04
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Major projects using SQLite

Jon García de Salazar Bilbao <[EMAIL PROTECTED]> wrote:
> 
> Could you give examples of some major software projects using SQLite?

* Apple OS-X
* Solaris 10
* Firefox
* Monotone
* SymbianOS (used in cellphones)
* YUM
* America Online (AOL)
* PHP
* Philips MP3 players
* POPFile
* Amarok
* Trac
* McAfee Anti-Virus
* SpamSieve
* DSPAM
* Songbird

--
D. Richard Hipp   <[EMAIL PROTECTED]>



AW: [sqlite] How do you find out the names of the fields within a table?

If you're using VB.NET 2002, then look at the following classes:

System.Collections.ArrayList
System.Collections.SortedList
System.Collections.Hashtable

These are all dynamic container classes.

Mike 

-Original Message-
From: John Newby [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 13, 2006 18:21
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How do you find out the names of the fields within a
table?

Hi Fred, yes I am using VB.NET.  I have searched online for VB.NET dynamic
arrays, and one site I came across states that the new features of the
2.0 framework allow for dynamic arrays, but sadly I am using the
1.1 version.
http://www.ondotnet.com/pub/a/dotnet/2004/05/25/whidbey_vbnet_pt2.html
I am going to try and give it a go though.

Many thanks

John



On 13/07/06, Fred Williams <[EMAIL PROTECTED]> wrote:
>
> Did not realize he was using VB when I sent my last message.  I'd bet 
> VB still can't do anything dynamic.  That's only one of the reasons I 
> walked away from VB's "Daddy" (Quick Basic) years ago.
>
> > -Original Message-
> > From: Martin Jenkins [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, July 13, 2006 8:08 AM
> > To: sqlite-users@sqlite.org
> > Subject: Re: [sqlite] How do you find out the names of the fields 
> > within a table?
> >
> >
> > John Newby wrote:
> > > Yeah I can get the names, but I need to put them in an
> > array, and to put
> > > them in an array I need to know the size of the array to
> > store them in,
> > > so I
> > > need to get a count first, then store this number as the
> > size of the array
> > > before I store the values into the array.
> >
> > Are you sure there are no dynamic container objects in VB that support 
> > an "append" method? Lists?
> >
> > If not (and I find that hard to believe) you could hack around it by 
> > appending the names to a string, then parsing the string and then 
> > dimensioning your array, or you could build a linked list but ... 
> > surely VB has more intelligent containers than statically sized 
> > arrays?
> >
> > Martin
>
>



AW: [sqlite] How do you find out the names of the fields within a table?

If you're using VB6, your choices are VB's native Collection or the
Scripting.Dictionary class mentioned by Craig. Look them up in VB's online
help; there are examples on using them. I'd suggest getting more familiar
with VB and its container classes...

Mike 

-Original Message-
From: John Newby [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 13, 2006 18:10
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How do you find out the names of the fields within a
table?

Hi Craig, thanks for your comments, but I can't seem to get that to work; it
says that Add and Count aren't part of DictionaryBase or DictionaryEntry
(those being the only two dictionary types I could find in VB).

I think I'm nearly there with the array.

Thanks again for your help

John

On 13/07/06, Craig Morrison <[EMAIL PROTECTED]> wrote:
>
> John Newby wrote:
> > Hi Craig, I have looked into the dictionary that you suggested, but 
> > wouldn't this be the same as the array as I would still need to know 
> > how many
> times
> > to go through the while loop to add the names of the fields?  Also I 
> > need to get the names of the fields back out to populate a comboBox, 
> > which I already know how to do with an array.
>
> item = myDict("hash")
>
> Its pretty simple.
>
> Use your favorite search engine to look for "scripting.dictionary" and 
> you will find a wealth of information on VBScript's dictionary object.
>
> From an earlier message of yours:
>
> >' select all field names from the selected table
> >sqlite_cmd.CommandText = (strSQL)
> >' Now the SQLiteCommand object can give us a DataReader-Object:
> >sqlite_datareader = sqlite_cmd.ExecuteReader()
> >i = 0
> >While sqlite_datareader.Read()
> >'Try
> >fieldNames(i) = sqlite_datareader("name")
> >'Catch es As Exception
> >'  MessageBox.Show(es.Message)
> >'End Try
> >i += 1
> >End While
> >
> >sqlite_datareader.Close()
>
> You are already getting the field names on the fly, why do you insist on
>   knowing how many fields there are?
>
> After you populate the dictionary, you can get the count of items
> (fields) by doing:
>
> itemCount = myDict.Count
>
> To attempt to stave off any more questions:
>
> set fieldNames = CreateObject("Scripting.Dictionary")
> sqlite_cmd.CommandText = (strSQL)
> sqlite_datareader = sqlite_cmd.ExecuteReader()
> i = 0
> While sqlite_datareader.Read()
>  fieldNames.Add i, sqlite_datareader("name")
>  i += 1
> End While
> sqlite_datareader.Close()
> fieldCount = fieldNames.Count
>
> Your problem is *not* sqlite, but rather a command of VB itself. You 
> might want to spend some time getting up to speed on VB.
>
> Or better yet, why don't you just pass a reference to the list box and 
> add them directly to it?
>
> I'm not going to comment further on this topic as it is now QUITE far 
> off-topic.
>
> --
> Craig Morrison
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> http://pse.2cah.com
>Controlling pseudoephedrine purchases.
>
> http://www.mtsprofessional.com/
>A Win32 email server that works for You.
>
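For completeness, the underlying SQLite question here, enumerating a table's columns without knowing their count up front, is easy with PRAGMA table_info, which returns one row per column. A sketch in Python's stdlib sqlite3 (the `contacts` table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts(id INTEGER, name TEXT, email TEXT)")

# PRAGMA table_info yields (cid, name, type, notnull, dflt_value, pk)
# per column; just append names as you go -- no pre-sized array needed.
field_names = [row[1] for row in conn.execute("PRAGMA table_info(contacts)")]
print(field_names)   # ['id', 'name', 'email']
print(len(field_names))  # 3
```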



AW: [sqlite] How do you find out the names of the fields within a table?

VB has always had dynamic containers. For plain arrays, things such as ReDim
helped. Later, Collection (actually a dictionary/hashtable) was introduced.
In VB.NET you of course have all the containers the .NET framework supplies;
in fact there are classes for lists (ArrayList, LinkedList, and their
template/generic counterparts...).

Mike

-Original Message-
From: Fred Williams [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 13, 2006 18:03
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] How do you find out the names of the fields within a
table?

Did not realize he was using VB when I sent my last message.  I'd bet VB
still can't do anything dynamic.  That's only one of the reasons I walked
away from VB's "Daddy" (Quick Basic) years ago.

> -Original Message-
> From: Martin Jenkins [mailto:[EMAIL PROTECTED]
> Sent: Thursday, July 13, 2006 8:08 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] How do you find out the names of the fields 
> within a table?
>
>
> John Newby wrote:
> > Yeah I can get the names, but I need to put them in an
> array, and to put
> > them in an array I need to know the size of the array to
> store them in,
> > so I
> > need to get a count first, then store this number as the
> size of the array
> > before I store the values into the array.
>
> Are you sure there are no dynamic container objects in VB that support an 
> "append" method? Lists?
>
> If not (and I find that hard to believe) you could hack around it by 
> appending the names to a string, then parsing the string and then 
> dimensioning your array, or you could build a linked list but ... 
> surely VB has more intelligent containers than statically sized 
> arrays?
>
> Martin



AW: [sqlite] Problems with Multi-Threaded Application.

Use a queue for the database operations in this case. You won't suffer from
lock or busy errors if all access is serialized. Queues can scale very well
if done right.

Mike

-Original Message-
From: Gussimulator [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 12, 2006 20:55
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Problems with Multi-Threaded Application.

I cannot delay my thread.

Also, if I made a thread per event, then that wouldn't work either, since
every thread would open the database to try a write on it.

We are talking about 100 events in a second. This thread handles the queries
as it gets the events.





- Original Message - 
From: "Jay Sprenkle" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, July 12, 2006 2:57 PM
Subject: Re: [sqlite] Problems with Multi-Threaded Application.


> On 7/12/06, Gussimulator <[EMAIL PROTECTED]> wrote:
>>
>> About the "can't read while writing": how can I avoid it? I can't stop my
>> system while using the database. There would be no point in having a
>> database then.
>
> Check the return code from the operation; if it says Busy or Locked,
> then redo the operation. You may need to retry several times.
>
> // here's an example with a 1/2 second delay:
>sqlite3_busy_timeout(db, 500);
>bool locked = true;
>for ( int i = 0; ( i < 10 ) && ( locked ); i++ )
>  switch ( sqlite3_step(stmt) )
>{
>  case SQLITE_BUSY:
>  case SQLITE_LOCKED:
>break;
>  case SQLITE_ROW:
>  case SQLITE_DONE:
>locked = false;
>break;
>  default:
>throw Exception( "Cannot execute sql" );
>break;
>} 
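A runnable illustration of the same busy/retry pattern, sketched with Python's stdlib sqlite3 (file name, retry counts, and delays are arbitrary): a second connection retries its write while the first holds the write lock, and succeeds once the lock is released.

```python
import os, sqlite3, tempfile, time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None -> autocommit; timeout=0 -> no built-in busy wait,
# so "database is locked" surfaces immediately and we retry ourselves.
writer = sqlite3.connect(path, timeout=0, isolation_level=None)
writer.execute("CREATE TABLE log(msg TEXT)")
reader = sqlite3.connect(path, timeout=0, isolation_level=None)

def try_write(conn, sql, retries=5, delay=0.01):
    # Retry while SQLite reports SQLITE_BUSY ("database is locked").
    for _ in range(retries):
        try:
            conn.execute(sql)
            return True
        except sqlite3.OperationalError:
            time.sleep(delay)
    return False

writer.execute("BEGIN IMMEDIATE")   # writer takes the write lock
writer.execute("INSERT INTO log VALUES ('first')")

blocked = try_write(reader, "INSERT INTO log VALUES ('second')")
writer.execute("COMMIT")            # release the lock
unblocked = try_write(reader, "INSERT INTO log VALUES ('second')")
print(blocked, unblocked)  # False True
```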


