Re: [sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-06 Thread James Gregurich

so you are suggesting that I put an INSERT in a C loop checking for a
constraint violation failure. if I get one, I use errmsg to get the
"column <name> is not unique" message and extract the column name. Then, I
issue a DELETE to clear out rows that match the value of that column. is that correct?
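
A minimal sketch of that sequence, assuming a hypothetical table t with a
UNIQUE column c (only the SQL is shown; the loop and the errmsg parsing
would live in the C code):

INSERT INTO t(c, d) VALUES(:c, :d);   -- fails with "column c is not unique"
DELETE FROM t WHERE c = :c;           -- clear out the conflicting row(s)
INSERT INTO t(c, d) VALUES(:c, :d);   -- retry; succeeds if c was the only conflict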





On Jul 6, 2009, at 3:53 AM, Simon Slavin wrote:

 (Sorry, hit 'Send' before I meant to.)

 On 6 Jul 2009, at 6:34am, James Gregurich wrote:

 a question for the sqlite developers.

 The inability of INSERT OR REPLACE to maintain referential  
 integrity
 leaves me with no mechanism to implement a feature in my project that
 I was intending to provide.  Are there any plans to add in the
 functionality for INSERT OR REPLACE to call delete triggers so that
 referential integrity can be maintained?

 It should not call DELETE triggers since it never deletes.  It should
 call either INSERT triggers or UPDATE triggers depending on which one
 it decides to do.

 In the meantime, you can do the same thing yourself: instead of
 calling INSERT OR REPLACE, test to see which one would happen then
 call either INSERT or UPDATE.  Or do the INSERT, allow it to fail if
 it will, then do the UPDATE.  The triggers on both of those
 combinations will work correctly.
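
 A rough sketch of the second option, assuming a hypothetical table t with
 a UNIQUE column c (names are illustrative):

 BEGIN;
 INSERT INTO t(c, d) VALUES(:c, :d);   -- may fail with a constraint violation
 -- only if the INSERT failed, fall back to:
 UPDATE t SET d = :d WHERE c = :c;     -- fires the UPDATE triggers
 COMMIT;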

 Simon.
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-06 Thread James Gregurich

1) Why on earth would you want to scroll all the way to the bottom of  
a long email to get the response simply for the sake of "We read
English top to bottom"?

2) This is going to be a challenge for me because I'm not writing a  
fixed DB with a known schema.  I'm writing a higher-level data store  
wrapper using SQLITE as the backing. The schema is configured at  
runtime.

3)  I found the following comment on the MySQL dev site (relevant
since REPLACE INTO is inspired by MySQL). They apparently fire the
triggers as expected. It seems to me that your comments overcomplicate
the requirements. If you are going to delete a row, call the delete
triggers...that is all that is needed as best I can tell. However, I
do admit that I am not well educated on SQL db engines. I'm figuring  
all this out as I go along. Feel free to tell me how absurdly wrong I  
am.


http://dev.mysql.com/doc/refman/5.0/en/replace.html

 MySQL uses the following algorithm for REPLACE (and LOAD DATA ...  
 REPLACE):

 Try to insert the new row into the table

 While the insertion fails because a duplicate-key error occurs for a  
 primary key or unique index:

 Delete from the table the conflicting row that has the duplicate key  
 value

 Try again to insert the new row into the table



and in the comments...


 If you are using REPLACE INTO... triggers are fired in this order
 (if delete of duplicate key is used):
 - before insert
 - before delete
 - after delete
 - after insert





On Jul 6, 2009, at 1:15 PM, Simon Slavin wrote:

 Please quote previous text above your response to it.  We read English
 top to bottom.

 On 6 Jul 2009, at 8:22pm, James Gregurich wrote:

 On Jul 6, 2009, at 3:53 AM, Simon Slavin wrote:

 It should not call DELETE triggers since it never deletes.  It  
 should
 call either INSERT triggers or UPDATE triggers depending on which  
 one
 it decides to do.

 In the meantime, you can do the same thing yourself: instead of
 calling INSERT OR REPLACE, test to see which one would happen then
 call either INSERT or UPDATE.  The triggers on both of those
 combinations will work correctly.

 so you are suggesting that I put an INSERT in a C loop checking for a
 constraint violation failure.

 You have pointed out an error I made.  I was thinking that REPLACE
 meant that only one existing row could be replaced.  This is wrong:
 the new row can replace any number of existing rows.  Thank you for
 spotting my error.  The documentation points it out, not quite as
 clearly, in the REPLACE section of

 http://www.sqlite.org/lang_conflict.html

 So any proper trigger structure would have to call a combination of
 all three types of triggers: INSERT, DELETE and UPDATE.  I can't think
 of a good way to manage this properly.  And that may be why INSERT OR
 REPLACE itself doesn't use triggers correctly: it's too complicated to
 work out which of the existing rows is being REPLACED.  And you can't
 call DELETE triggers instead, because they're intended to stop
 something being deleted, and it might be okay to DELETE this thing if
 you're immediately going to INSERT another row that satisfies the
 requirement.

 I can only suggest that you handle the constraints testing yourself,
 in your code.  You will know, from the design of your database,
 whether your new row should be considered a replacement for an
 existing one.  You're going to have to replace your INSERT OR REPLACE
 with your own check for other constraints, and then decide in your
 own code what commands to execute.  I don't think it will be possible
 to write code to do this properly for an arbitrary database.

  Or do the INSERT, allow it to fail if
 it will, then do the UPDATE.

 This previous suggestion of mine won't work at all, given that there's
 no way to tell which of the existing records you think you're  
 updating.

 I conclude that INSERT OR REPLACE isn't compatible with triggers, or
 that triggers have to support FOR EACH TRANSACTION as well as ROW and
 STATEMENT, or that INSERT OR REPLACE has to treat a primary key
 conflict differently to some other kind of conflict.  There are
 drawbacks to all three of these.  I can't think of a way to do it that
 suits SQLite's small/neat/obvious design criteria.

 Simon.
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] referential integrity and INSERT OR REPLACE

2009-07-06 Thread James Gregurich

based on the test I just ran, it reports the first one encountered only.

On Jul 6, 2009, at 2:53 PM, Nicolas Williams wrote:

 On Sat, Jul 04, 2009 at 10:24:50AM +0200, Kees Nuyt wrote:
 On Fri, 03 Jul 2009 14:38:43 -0700, James Gregurich
 bayouben...@mac.com wrote:


 nuts. that makes INSERT OR REPLACE worthless if you have tables
 dependent on one another.


 Is there any way to manually get a list of records for which there
 would be a conflict if a given record was inserted?

 BEGIN;
 INSERT the row, and observe any UNIQUE constraint failures.

 If there isn't one, the INSERT succeeded and you're done.

 Does SQLite3 report all those failures in one attempt?  Or must one
 loop
 to make this a robust conflict detection algorithm?
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-06 Thread James Gregurich


On Jul 6, 2009, at 3:14 PM, Nicolas Williams wrote:

 On Mon, Jul 06, 2009 at 02:49:07PM -0700, James Gregurich wrote:
 1) Why on earth would you want to scroll all the way to the bottom of
 a long email to get the response simply for the sake of "We read
 English top to bottom"?

 Any quoted context must be read before the reply, or else it is not
 necessary and should not have been quoted.  Thus "top posting" (as
 the practice is known) is obnoxious because it causes the reader to
 read the reply, read the quoted text, and then re-read the reply to
 understand it in its proper context, and if the quoted text was not
 necessary in order to understand your reply (as in this particular
 case), then it just wastes bandwidth and storage.

 2) This is going to be a challenge for me because I'm not writing a
 fixed DB with a known schema.  I'm writing a higher-level data store
 wrapper using SQLITE as the backing. The schema is configured at
 runtime.

 I agree.  ISTM that SQLite3 ought to either call DELETE triggers
 for all rows deleted by INSERT OR REPLACE, or UPDATE triggers for the
 row that had the same PRIMARY KEY and DELETE triggers for any other
 deleted rows.

 (To me INSERT OR REPLACE can equally be seen as deleting all  
 conflicting
 rows, then inserting a replacement row, or else as updating the row  
 that
 conflicts with the new row's PRIMARY KEY and deleting rows that  
 conflict
 on other unique indexes.  If the INSERT OR REPLACE has many input rows
 then this gets murkier in that which rows are updated or deleted may
 become non-deterministic, I think, so deleting all affected rows seems
 like the better solution.)

 3)  I found the following comment on the MySQL dev site (relevant
 since REPLACE INTO is inspired by MySQL). They apparently fire the
 triggers as expected. It seems to me that your comments  
 overcomplicate
 the requirements. If you are going to delete a row, call the delete
 triggers...that is all that is needed as best I can tell. However, I
 do admit that I am not well educated on SQL db engines. I'm figuring
 all this out as I go along. Feel free to tell me how absurdly wrong I
 am.

 INSERT OR REPLACE is not a feature of the standard, so anything  
 goes...

 Nico
 -- 
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



How's this...

you have a pretty low threshold for "obnoxious." Frankly, I lack the
desire and energy needed to keep up with the list of rules people make
up. read the email or don't. it doesn't matter to me. I don't make
rules designed to get other people to adhere to my personal
preferences...and it annoys me when I get lectured by others who
expect me to follow some unwritten code of conduct just to suit their
personal tastes. The adjective I'd use to counter "obnoxious" is
"uptight."





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-05 Thread James Gregurich
a question for the sqlite developers.

The inability of INSERT OR REPLACE to maintain referential integrity  
leaves me with no mechanism to implement a feature in my project that  
I was intending to provide.  Are there any plans to add in the  
functionality for INSERT OR REPLACE to call delete triggers so that  
referential integrity can be maintained?
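
For illustration, a made-up parent/child pair where the cascade is emulated
with a trigger (as is typical without native foreign key enforcement); the
implicit delete performed by INSERT OR REPLACE bypasses it:

CREATE TABLE parent(id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE child(parent_id INTEGER, info TEXT);
CREATE TRIGGER parent_cascade BEFORE DELETE ON parent
BEGIN
  DELETE FROM child WHERE parent_id = old.id;
END;

INSERT INTO parent VALUES(1, 'a');
INSERT INTO child VALUES(1, 'x');
-- The UNIQUE conflict on name deletes row 1 first, but the BEFORE DELETE
-- trigger does not fire, so the child row is left orphaned:
INSERT OR REPLACE INTO parent VALUES(2, 'a');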

thanks,
James
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] referential integrity and INSERT OR REPLACE

2009-07-03 Thread James Gregurich

question:

How do I maintain referential integrity on an INSERT OR REPLACE given
it does not call the delete trigger on the offending rows?

thanks,
james
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] referential integrity and INSERT OR REPLACE

2009-07-03 Thread James Gregurich

I read on another posting in the archives that it does not. However, I  
haven't tried it myself.

-James

 Simon Slavin
 Fri, 03 Jul 2009 09:44:22 -0700

 On 3 Jul 2009, at 3:28am, James Gregurich wrote:

  How do I maintain referential integrity on an INSERT OR REPLACE given
  it does not call the delete trigger on the offending rows?

 If SQLite decides that it's going to do a REPLACE rather than an
 INSERT, does it call the triggers for UPDATE ?  I think that would be
 a good way for it to work.

 Simon.
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] referential integrity and INSERT OR REPLACE

2009-07-03 Thread James Gregurich

nuts. that makes INSERT OR REPLACE worthless if you have tables  
dependent on one another.


Is there any way to manually get a list of records for which there  
would be a conflict if a given record was inserted?


 On Fri, 03 Jul 2009 11:29:14 -0700, James Gregurich
 bayouben...@mac.com wrote:

 
 based on my reading of the docs for INSERT OR REPLACE, it will delete
 rows for ANY constraint violation, not just one involving the primary
 key. Is that reading wrong?

 You are right, for UNIQUE constraint violations.

 Indeed it breaks referential integrity, because the new row
 can contain a new value for the primary key, without
 cascading deletes or updates concerning the old value.

 My conclusion: syntax shortcuts like INSERT OR REPLACE are
 evil ;)

 -James
 -- 
   (  Kees Nuyt
   )
 c[_]

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] getting offending constraint

2009-07-02 Thread James Gregurich

I dropped the constraint and added the trigger.


strange. works as expected in the sqlite3 exe.   in C code, I get
"constraint failed" from sqlite3_errmsg. If I drop the trigger shown
below, the C code has no constraint violation, as would be expected,
which means the trigger is causing the constraint violation.



sqlite> .dump
BEGIN TRANSACTION;
CREATE TABLE test1(i INTEGER, a INTEGER, b INTEGER, c INTEGER, d  
INTEGER);
INSERT INTO test1 VALUES(0,1,2,3,4);
INSERT INTO test1 VALUES(1,10,20,30,40);
INSERT INTO test1 VALUES(2,100,200,300,400);
INSERT INTO test1 VALUES(3,1000,2000,3000,4000);
INSERT INTO test1 VALUES(4,1,2,3,4);
INSERT INTO test1 VALUES(5,10,20,30,40);
CREATE TABLE test1a(i INTEGER, a INTEGER, b INTEGER);
CREATE TABLE test1b(i INTEGER, c INTEGER, d INTEGER);
CREATE TRIGGER trig BEFORE INSERT ON test1b
BEGIN
SELECT CASE
WHEN (1)
THEN RAISE(ABORT, 'no parent element')
END;
END;
COMMIT;
sqlite> INSERT INTO test1b VALUES(1,10,20);
SQL error: no parent element
sqlite>
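
With WHEN (1) the trigger above raises unconditionally. For reference, a
sketch of an actual parent-existence check in the same style, assuming rows
in test1b are meant to reference an existing i in test1a:

CREATE TRIGGER trig BEFORE INSERT ON test1b
BEGIN
SELECT CASE
WHEN ((SELECT i FROM test1a WHERE i = new.i) IS NULL)
THEN RAISE(ABORT, 'no parent element')
END;
END;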










On Jul 1, 2009, at 6:40 PM, Simon Slavin wrote:


 On 2 Jul 2009, at 1:57am, James Gregurich wrote:

 I tried that, but I still got back "constraint failed" rather than my
 RAISE message. Since you say it should work, I probably did something
 wrong. I'll look at it again.

 If you left the constraint definition in your table definition then
 you're getting an error from that rather than from the trigger.

 Simon.
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] getting offending constraint

2009-07-01 Thread James Gregurich


howdy!

Would there be a way to identify the offending constraint if  
SQLITE_CONSTRAINT is returned?


  sqlite3_errmsg is just telling me "constraint failed"...which is of
limited usefulness.

-James

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] getting offending constraint

2009-07-01 Thread James Gregurich

How would I have hijacked a thread?  I changed the subject and  
removed the original text.





On Jul 1, 2009, at 12:32 PM, Roger Binns wrote:


 James Gregurich wrote:

 howdy!

 You hijacked someone else's thread by hitting reply, rather than
 starting a new one.  That is very poor netiquette.

 Would there be a way to identify the offending constraint if
 SQLITE_CONSTRAINT is returned?

  sqlite3_errmsg is just telling me "constraint failed"...which is of
 limited usefulness.

 http://www.sqlite.org/cvstrac/tktview?tn=1648

 Roger
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] getting offending constraint

2009-07-01 Thread James Gregurich

ah. I have no knowledge of how mailing list programs work. no poor  
etiquette was intended.



On Jul 1, 2009, at 1:41 PM, P Kishor wrote:

 On Wed, Jul 1, 2009 at 3:39 PM, James Gregurich bayouben...@mac.com
 wrote:

 How would I have hijacked a thread?  I changed the subject and
 removed the original text.

 ...

 that is exactly how a thread is hijacked... changing the subject is
 not enough. Every message has a unique id that is used by the mail
 programs to keep track of threading.
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] getting offending constraint

2009-07-01 Thread James Gregurich

thanks.

I tried that, but I still got back "constraint failed" rather than my
RAISE message. Since you say it should work, I probably did something  
wrong. I'll look at it again.

On Jul 1, 2009, at 3:59 PM, Simon Slavin wrote:


 On 1 Jul 2009, at 8:19pm, James Gregurich wrote:

 Would there be a way to identify the offending constraint if
 SQLITE_CONSTRAINT is returned?


 sqlite3_errmsg is just telling me "constraint failed"...which is of
 limited usefulness.

 Instead of the constraint, you could define a trigger, and use the
 'RAISE' form to supply your own error message.  Here's an example:

 CREATE TRIGGER authors_books_insert BEFORE INSERT ON books
   FOR EACH ROW BEGIN
   SELECT RAISE(ROLLBACK, 'Attempt to add a book with an author
 number which is not valid.')
   WHERE (SELECT id FROM authors WHERE id = new.author) IS NULL;
   END

 You get back exactly the error message you put in.
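
 For example, with the books/authors schema the trigger assumes (column
 names here are illustrative), an insert with an unknown author id is
 rolled back with exactly that text:

 INSERT INTO books(title, author) VALUES('Some Title', 9999);
 -- Error: Attempt to add a book with an author number which is not valid.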

 Simon.
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-19 Thread James Gregurich
thanks!


On Jun 18, 2009, at 6:01 PM, Dennis Cote wrote:

 James Gregurich wrote:
 on that update statement, is the SQL optimizer smart enough to not
 rerun that select statement for each column in the update's set
 clause? Is it going to run a single select statement to get value1,
 value2, etc.  or is it going to run one for each column in the
 update statement?


 James,

 No, I don't believe the optimizer is that smart. SQLite will execute
 multiple queries.

 If you are concerned that the matches table is large you could add an
 index on the row1 column of the matches table to speed up the row2
 lookups. The lookups in table2 using the rowid should be very fast,  
 and
 once the page with the required record has been read into the cache  
 the
 subsequent value  lookup queries
 should execute very quickly as well.
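
 For instance, using the names from the earlier merge emulation (the index
 name is just illustrative):

 CREATE INDEX matches_row1_idx ON matches(row1);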

 HTH
 Dennis Cote
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-17 Thread James Gregurich

Dennis,

question on an old post of yours below...


on that update statement, is the SQL optimizer smart enough to not  
rerun that select statement for each column in the update's set  
clause? Is it going to run a single select statement to get value1,  
value2, etc.  or is it going to run one for each column in the  
update statement?

-James

 Petite Abeille wrote:
 
  How does one emulate a DML MERGE statement in SQLite [1]?
 
  INSERT OR REPLACE sounds promising but the REPLACE documentation  
 under
  the ON CONFLICT clause seems to imply that in the case of a  
 constraint
  violation the existing row will be deleted entirely and then  
 replaced
  by a brand new row instead of being merely updated [2].
 
  Apologies if this is a FAQ, but my google-fu is eluding me on this  
 one.
 
  Thanks in advance.
 

 I haven't tested this so take it with a grain of salt, but I think  
 this
 should do the same thing as the merge statement.

 Given two tables, table1 and table2.

 merge into table1 using table2 on condition
  when matched then update
  set column1 = value1,
  column2 = value2 ...
  when not matched then insert column1, column2 ...
  values (value1, value2 ...)

 Should be the same as the following series of SQL statements.

 create temp table matches as
  select t1.rowid as row1, t2.rowid as row2
  from table1 as t1
  join table2 as t2
  where condition

 insert into table1 (column1, column2 ...)
  select value1, value2 ... from table2
  where rowid not in (select row2 from matches);

 update table1
  set column1 = (select value1 from table2
  where table2.rowid =
  (select row2 from matches
  where row1 = rowid)),
  column2 = (select value2 from table2
  where table2.rowid =
  (select row2 from matches
  where row1 = rowid))
  ...
  where rowid in (select row1 from matches);

 drop table matches;


 HTH
 Dennis Cote
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-17 Thread James Gregurich

So what are you going to do? we need to get your plans pinned down.

On Jun 17, 2009, at 11:46 AM, James Gregurich wrote:


 Dennis,

 question on an old post of yours below...


 on that update statement, is the SQL optimizer smart enough to not
 rerun that select statement for each column in the update's set
 clause? Is it going to run a single select statement to get value1,
 value2, etc.  or is it going to run one for each column in the
 update statement?

 -James

 Petite Abeille wrote:

 How does one emulate a DML MERGE statement in SQLite [1]?

 INSERT OR REPLACE sounds promising but the REPLACE documentation
 under
 the ON CONFLICT clause seems to imply that in the case of a
 constraint
 violation the existing row will be deleted entirely and then
 replaced
 by a brand new row instead of being merely updated [2].

 Apologies if this is a FAQ, but my google-fu is eluding me on this
 one.

 Thanks in advance.


 I haven't tested this so take it with a grain of salt, but I think
 this
 should do the same thing as the merge statement.

 Given two tables, table1 and table2.

 merge into table1 using table2 on condition
 when matched then update
 set column1 = value1,
 column2 = value2 ...
 when not matched then insert column1, column2 ...
 values (value1, value2 ...)

 Should be the same as the following series of SQL statements.

 create temp table matches as
 select t1.rowid as row1, t2.rowid as row2
 from table1 as t1
 join table2 as t2
 where condition

 insert into table1 (column1, column2 ...)
 select value1, value2 ... from table2
 where rowid not in (select row2 from matches);

 update table1
 set column1 = (select value1 from table2
 where table2.rowid =
 (select row2 from matches
 where row1 = rowid)),
 column2 = (select value2 from table2
 where table2.rowid =
 (select row2 from matches
 where row1 = rowid))
 ...
 where rowid in (select row1 from matches);

 drop table matches;


 HTH
 Dennis Cote
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-17 Thread James Gregurich

oops. sorry for errant message, folks. I had the wrong email selected  
when I hit the button and didn't pay attention to what I was doing.

On Jun 17, 2009, at 3:19 PM, James Gregurich wrote:


 So what are you going to do? we need to get your plans pinned down.

 On Jun 17, 2009, at 11:46 AM, James Gregurich wrote:


 Dennis,

 question on an old post of yours below...


 on that update statement, is the SQL optimizer smart enough to not
 rerun that select statement for each column in the update's set
 clause? Is it going to run a single select statement to get value1,
 value2, etc.  or is it going to run one for each column in the
 update statement?

 -James

 Petite Abeille wrote:

 How does one emulate a DML MERGE statement in SQLite [1]?

 INSERT OR REPLACE sounds promising but the REPLACE documentation
 under
 the ON CONFLICT clause seems to imply that in the case of a
 constraint
 violation the existing row will be deleted entirely and then
 replaced
 by a brand new row instead of being merely updated [2].

 Apologies if this is a FAQ, but my google-fu is eluding me on this
 one.

 Thanks in advance.


 I haven't tested this so take it with a grain of salt, but I think
 this
 should do the same thing as the merge statement.

 Given two tables, table1 and table2.

 merge into table1 using table2 on condition
when matched then update
set column1 = value1,
column2 = value2 ...
 when not matched then insert column1, column2 ...
values (value1, value2 ...)

 Should be the same as the following series of SQL statements.

 create temp table matches as
select t1.rowid as row1, t2.rowid as row2
from table1 as t1
join table2 as t2
where condition

 insert into table1 (column1, column2 ...)
select value1, value2 ... from table2
where rowid not in (select row2 from matches);

 update table1
set column1 = (select value1 from table2
where table2.rowid =
(select row2 from matches
where row1 = rowid)),
column2 = (select value2 from table2
where table2.rowid =
(select row2 from matches
where row1 = rowid))
...
where rowid in (select row1 from matches);

 drop table matches;


 HTH
 Dennis Cote
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] append table

2009-05-16 Thread James Gregurich
howdy!

Questions:


suppose tables t1 and t2 exist in two separate db files, d1 and d2
respectively. t1 and t2 have identical schemas, but different data.

I want to append t2 to t1.


I suppose the way to do that is to open a connection to d1, use the
attach command to reference d2.t2, and issue an "INSERT INTO t1 SELECT
* FROM d2.t2;" query.


Is that correct?  Is there a better way?

Is this operation inefficient, or are there any pitfalls to watch out for?
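
A minimal sketch of that sequence, with illustrative file names, run on the
connection opened against d1:

ATTACH DATABASE 'd2.db' AS d2;
BEGIN;
INSERT INTO t1 SELECT * FROM d2.t2;
COMMIT;
DETACH DATABASE d2;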


-James





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread James Gregurich

So, you suggest I should build a commercial desktop application (for  
processing print-industry files and presenting them in a UI)  in such  
a way that it spawns multiple processes and communicates with them via  
the filesystem or IPC APIs?

Why would I want to go to that level of complexity in an  
uncontrollable environment (i.e. a consumer desktop computer) when I  
can just use NSOperation, boost::thread, and boost::mutex to build a  
single-process solution that shares data in a normal way between tasks?

James Gregurich
Engineering Manager
Markzware


On Apr 29, 2009, at 11:23 PM, Roger Binns wrote:


 James Gregurich wrote:
 Given the industry is going multicore and 16-core macintoshes for  
 your
 grand-mother are  just a few years away, I recommend you rethink your
 position on the use of threading.

 Threading is the worst solution to many cpu/core and large memory.
 Having the same memory addressed (which is what threading does) across
 multiple concurrently executing cpus/cores causes cache thrashing,
 memory contention, frequent use of memory barriers for synchronization
 (which also slows things down) and as memory becomes attached to
 individual cpus leads to access being over hypertransport/csi.

 Far better is using multiple processes which don't have those  
 issues.  A
 good example application to study is Google Chrome which uses multiple
 processes prolifically - each tab is a separate process, as well as  
 the
 various plugins, languages etc.  Each process can be appropriately
 locked down using the principle of least privilege.  If you use  
 threads
 then typically they all have permission to do anything the process  
 could do.

 (Also the multi-process approach is way easier to test, record/replay
 and is more deterministic)

 Erlang is also worth studying.  It only has single threaded processes
 (although the processes are lighter weight than operating system  
 processes).

 NSOperation is a major part of that effort.

 If the single encapsulated task doesn't go around concurrently
 touching bits of memory then it could be shunted to a separate process
 anyway.

 The danger from threads is not the threads themselves, but the
 concurrency.  It is virtually impossible to prove that a threaded
 process does the concurrency correctly under all circumstances.  If  
 you
 eliminate the concurrency then you can use multiple processes, and can
 usually even make it scale over multiple machines!

 Roger
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread James Gregurich

I describe reality.

Someone has to be the arbiter of "better." Generally, that arbiter is
the guy handing out the research grants.

On May 1, 2009, at 5:33 AM, John Stanton wrote:


 Science is the Scientific Method - observation, hypothesis and
 skepticism.  The antithesis of politics.  There are no facts in  
 science,
 only observations and any hypothesis is only valid until a better one
 replaces it.

 You describe bad, politicized science.

 James Gregurich wrote:
 With all due respect,  science itself is a set of
 positions (opinions) which are endorsed by a small group of people as
 official doctrine after appropriate study. Saying "A 'position' is
 politics, not science" is not a particularly meaningful statement.
 If
 you want to argue that point, feel free to send me a private email.

 My threaded application works pretty darn well. I can process
 thousands of print industry files on an 8-core system keeping the
 cores busy without lagging the GUI for other applications. Just
 because many people create ill conceived programs doesn't mean
 threaded programs are inherently doomed to be ill-conceived. The
 development tools and techniques for building concurrent systems are
 advancing and making concurrency quite feasible.

 James Gregurich
 Engineering Manager
 Markzware

 On Apr 30, 2009, at 5:01 AM, John Stanton wrote:

 A position is politics, not science.  Warnings about the use of
 threads are based on science, and advise you to avoid them if  
 possible
 for your own protection.

 I see ill conceived programs using threads which go to complex
 synchronization to achieve the equivalent of single stream execution
 but
 with much greater overhead.  A KISS situation.

 James Gregurich wrote:
 thanks for the info. That should work for me.

 Given the industry is going multicore and 16-core macintoshes for
 your
 grand-mother are  just a few years away, I recommend you rethink  
 your
 position on the use of threading. Apple is heavily pushing
 parallelism
 on its developers.  NSOperation is a major part of that effort.  
 As I
 understand it, MS is developing their copy of NSOperation for  
 VS2010.
 The development landscape is only going to get more threaded as  
 time
 goes on.

 -James



 On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:


 howdy!

 question:

 for an in-memory db with the threading mode set to serialized, is

 the

 internal mutex held for an entire transaction so that one thread

 won't

 access the db while another one is in the middle of a transaction

 with

 multiple insert statements?

 No.  But the mutex is recursive.  So you can get a copy of it  
 using
 sqlite3_db_mutex() then lock it yourself using
 sqlite3_mutex_enter()/
 leave().

 Also remember:  You should not be using threads.  Threads will  
 bring
 only grief and woe.  On your own head be it.



 D. Richard Hipp
 drh at hwaci.com


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread James Gregurich

With all due respect,  science itself is a set of
positions (opinions) which are endorsed by a small group of people as
official doctrine after appropriate study. Saying "A 'position' is
politics, not science" is not a particularly meaningful statement.  If
you want to argue that point, feel free to send me a private email.

My threaded application works pretty darn well. I can process  
thousands of print industry files on an 8-core system keeping the  
cores busy without lagging the GUI for other applications. Just  
because many people create ill conceived programs doesn't mean  
threaded programs are inherently doomed to be ill-conceived. The  
development tools and techniques for building concurrent systems are  
advancing and making concurrency quite feasible.

James Gregurich
Engineering Manager
Markzware

On Apr 30, 2009, at 5:01 AM, John Stanton wrote:

 A position is politics, not science.  Warnings about the use of
 threads are based on science, and advise you to avoid them if possible
 for your own protection.

 I see ill conceived programs using threads which go to complex
 synchronization to achieve the equivalent of single stream execution  
 but
 with much greater overhead.  A KISS situation.

 James Gregurich wrote:
 thanks for the info. That should work for me.

 Given the industry is going multicore and 16-core macintoshes for  
 your
 grand-mother are  just a few years away, I recommend you rethink your
 position on the use of threading. Apple is heavily pushing  
 parallelism
 on its developers.  NSOperation is a major part of that effort. As I
 understand it, MS is developing their copy of NSOperation for VS2010.
 The development landscape is only going to get more threaded as time
 goes on.

 -James



 On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:


 howdy!

 question:

 for an in-memory db with the threading mode set to serialized, is

 the

 internal mutex held for an entire transaction so that one thread

 won't

 access the db while another one is in the middle of a transaction

 with

 multiple insert statements?

 No.  But the mutex is recursive.  So you can get a copy of it using
 sqlite3_db_mutex() then lock it yourself using  
 sqlite3_mutex_enter()/
 leave().

 Also remember:  You should not be using threads.  Threads will bring
 only grief and woe.  On your own head be it.



 D. Richard Hipp
 drh at hwaci.com


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread James Gregurich

So, you suggest I should build a commercial desktop application (for  
processing print-industry files and presenting them in a UI)  in such  
a way that it spawns multiple processes and communicates with them via  
the filesystem or IPC APIs?

Why would I want to go to that level of complexity in an  
uncontrollable environment (i.e. a consumer desktop computer) when I  
can just use NSOperation, boost::thread, and boost::mutex to build a  
single-process solution that shares data in a normal way between tasks?

James Gregurich
Engineering Manager
Markzware


On Apr 29, 2009, at 11:23 PM, Roger Binns wrote:


 James Gregurich wrote:
 Given the industry is going multicore and 16-core macintoshes for  
 your
 grand-mother are  just a few years away, I recommend you rethink your
 position on the use of threading.

 Threading is the worst solution to many cpu/core and large memory.
 Having the same memory addressed (which is what threading does) across
 multiple concurrently executing cpus/cores causes cache thrashing,
 memory contention, frequent use of memory barriers for synchronization
 (which also slows things down) and as memory becomes attached to
 individual cpus leads to access being over hypertransport/csi.

 Far better is using multiple processes which don't have those  
 issues.  A
 good example application to study is Google Chrome which uses multiple
 processes prolifically - each tab is a separate process, as well as  
 the
 various plugins, languages etc.  Each process can be appropriately
 locked down using the principle of least privilege.  If you use  
 threads
 then typically they all have permission to do anything the process  
 could do.

 (Also the multi-process approach is way easier to test, record/replay
 and is more deterministic)

 Erlang is also worth studying.  It only has single threaded processes
 (although the processes are lighter weight than operating system  
 processes).

 NSOperation is a major part of that effort.

 If the single encapsulated task doesn't go around concurrently
 touching bits of memory then it could be shunted to a separate process
 anyway.

 The danger from threads is not the threads themselves, but the
 concurrency.  It is virtually impossible to prove that a threaded
 process does the concurrency correctly under all circumstances.  If  
 you
 eliminate the concurrency then you can use multiple processes, and can
 usually even make it scale over multiple machines!

 Roger
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread James Gregurich

I can't agree that such a thing would be a good approach in a  
commercial desktop application environment. I'd never deploy something  
like that to millions of graphic designers. I want everything in a  
nice, tidy black-box that the average joe is incredibly unlikely to  
screw up. I have no idea what google chrome does and can't comment on  
it. I don't use the app.

Beyond that, I don't see how that approach solves the problem you  
point out. You still have concurrency going on with shared data  
structures. You still have to implement serialization on the shared  
data structures. The only thing you gain from that design over a  
threaded design is an extra degree of resiliency in that a crashed  
task won't bring down the app. On the downside, you have the extra  
hassle and complication of IPC.

The way I guard against a single task bringing the app down is that I  
religiously keep code exception safe, check for NULLs, and use  
shared_ptr's...I expect the same from my staff. Another way to guard  
the app is to minimize the use  of mutexes. Instead of blocking  
threads, you keep your tasks very small and focused, and you set up  
execution dependencies between tasks. Task B can be made not to run  
until task A is completed. Finally, the primary shared data structure  
is a SQLite in-memory store which is wrapped in C++ code that handles  
the dirty details of serializing transactions on the DB.


Handling the limited 32 bit VM space was indeed a challenge. I had to  
come up with a scheme to throttle the task queue once memory  
consumption reached a certain level.


As for the quality of staff members, that is always a challenge. All I  
can do about that is recruit and retain people who are talented and  
can write solid code.

-James


On Apr 30, 2009, at 4:37 PM, Roger Binns wrote:

 James Gregurich wrote:
 So, you suggest I should build a commercial desktop application (for
 processing print-industry files and presenting them in a UI)  in such
 a way that it spawns multiple processes and communicates with them  
 via
 the filesystem or IPC APIs?

 You obviously know more about your application, APIs, libraries etc  
 but
 it does sound like it would actually be a very good approach.  And of
 course you can also spawn processes on other machines too should the
 need arise.  The description sounds not too different than what Google
 Chrome does.

 Why would I want to go to that level of complexity in an
 uncontrollable environment (i.e. a consumer desktop computer) when I
 can just use NSOperation, boost::thread, and boost::mutex to build a
 single-process solution that shares data in a normal way between  
 tasks?

 Because while you are a perfect programming machine, not everyone else
 who will touch the code in the future is.  As an example if one mutex
 call is left out or the wrong acquired by programming accident, how  
 long
 would it take to know about it and fix it?

 If you have to run in a 32 bit address space then that also limits how
 much you can do in one process.  Do you even know how large the  
 maximum
 stack size is per thread and will other coders never exceed that?
 [Don't answer here - its your application, architecture and team :]

 Roger

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] mutex and begin/end transaction

2009-04-29 Thread James Gregurich
howdy!

question:

for an in-memory db with the threading mode set to serialized, is the  
internal mutex held for an entire transaction so that one thread won't  
access the db while another one is in the middle of a transaction with  
multiple insert statements?


thanks for any info.

James Gregurich
Engineering Manager
Markzware

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-29 Thread James Gregurich

thanks for the info. That should work for me.

Given the industry is going multicore and 16-core macintoshes for your  
grand-mother are  just a few years away, I recommend you rethink your  
position on the use of threading. Apple is heavily pushing parallelism  
on its developers.  NSOperation is a major part of that effort. As I  
understand it, MS is developing their copy of NSOperation for VS2010.  
The development landscape is only going to get more threaded as time  
goes on.

-James


 On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:

  howdy!
 
  question:
 
  for an in-memory db with the threading mode set to serialized, is  
 the
  internal mutex held for an entire transaction so that one thread  
 won't
  access the db while another one is in the middle of a transaction  
 with
  multiple insert statements?


 No.  But the mutex is recursive.  So you can get a copy of it using
 sqlite3_db_mutex() then lock it yourself using sqlite3_mutex_enter()/
 leave().

 Also remember:  You should not be using threads.  Threads will bring
 only grief and woe.  On your own head be it.



 D. Richard Hipp
 drh at hwaci.com

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-21 Thread James Gregurich

interesting. thanks for the tip.

Ill give it some consideration.

-James


On Apr 21, 2008, at 1:07:50 PM, Scott Hess wrote:

 If you create a file on disk and set PRAGMA synchronous = OFF, you
 should get pretty close to the performance of a shared in-memory
 database on most modern desktop operating systems - maybe close enough
 that you won't care to do anything beyond that.  If you further look
 at the recent discussion/patch to disable journaling entirely, you
 should get even closer.  Going this route means you won't have to
 worry so much about the case where someone accidentally pumps 4 gig of
 data into your database and sucks up all RAM.
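
 A sketch of the settings being described, run against the on-disk file
 (the journaling switch landed in later releases as PRAGMA journal_mode,
 so treat that line as optional here):

 PRAGMA synchronous = OFF;    -- don't wait for writes to reach the disk
 PRAGMA journal_mode = OFF;   -- skip the rollback journal entirely (later releases)
 -- ... then create tables and do the bulk work against this file as usual ...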

 Keep in mind that if you do these things, then it is quite trivial to
 generate corrupt database files if your app or OS crashes.  So you
 need to arrange to delete database files on app start-up to reset your
 state (an in-memory database wouldn't have that problem!).  On a
 Unix-based system, you may be able to open the database then delete
 the underlying path, but that may not work for however you are sharing
 things.

 -scott

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-20 Thread James Gregurich

for those who may be interested:

I ran a test with SQLite version: 3.5.8


I tried the scheme described earlier with each thread sharing a  
connection but writing into its own attached in-memory db on that  
connection.   Didn't work. all but the first writer thread failed with  
a SQLITE_ERROR

oh well.

I think I will go with CoreData on MacOSX and figure out something  
else to do on Windows later.


my thanks to all who attempted to provide a solution.


-James
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-20 Thread James Gregurich

yes. However, CoreData queues up modified managed objects in a managed  
object context and then commits them all in one shot making sure the  
serialization is done on the back side.

So, it does basically what someone here recommended earlier. I just  
don't have to write the mechanism myself.


Actually, CoreData is what I intended to use at first. However, I have  
explored the possibility of directly using SQLite instead to keep my  
document readers and their data management cross-platform.

On Apr 20, 2008, at 8:31 AM, Dennis Cote wrote:

 James Gregurich wrote:
 I think I will go with CoreData on MacOSX and figure out something
 else to do on Windows later.



 You do know that CoreData uses SQLite for its persistant storage.

 Dennis Cote
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-19 Thread James Gregurich

oh good! That isn't the version that ships with Leopard, but I can  
live with deploying my own version as part of my app.

Will I get the writer parallelism I'm after as long as each thread
writes exclusively into its own attached db?


in other words...two bulk insert operations going on simultaneously
on the same connection but each insert operation going into a  
different attached in-memory db.


On Apr 19, 2008, at 9:20 AM, Dan wrote:


 On Apr 19, 2008, at 6:06 AM, James Gregurich wrote:


 I'll ask this question. The answer is probably no, but I'll ask it
 for the sake of completeness.


 Suppose I created an in-memory db. I use the attach command to
 associate an additional in-memory db. Suppose I assign the main db to
 thread 1 and the associated db to thread 2. Can I share the  
 connection
 across the 2 threads if each thread works exclusively in its own db?

 I am aware that the connection is generally not threadsafe, but will
 it work if the two threads don't operate on the same db at the same
 time?

 As of 3.5, sqlite connections are threadsafe by default. With
 earlier versions, this trick will not work.

 Dan.


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-19 Thread James Gregurich

I don't immediately see how that would solve the problem.

The limitation of interest here (based on my perhaps limited  
understanding) is that locking has file-level granularity. I don't  
immediately see how a VFS implementation would allow for changing the
locking granularity of the overall system.

-James

On Apr 19, 2008, at 12:03 PM, Virgilio Fornazin wrote:

 what about creating a VFS for such a task? It can be accomplished in
 many ways,
 using heap memory, shared memory... not so easy to do, but not too
 complicated either... locking can be provided by multiple-reader
 single-writer lock strategies, etc...

 On Sat, Apr 19, 2008 at 2:29 PM, James Gregurich [EMAIL PROTECTED]
 wrote:


 oh good! That isn't the version that ships with Leopard, but I can
 live with deploying my own version as part of my app.

 Will I get the writer parallelism I'm after as long as each thread
 writes exclusively into its own attached db?


 in other words...two bulk insert operations going on simultaneously
 on the same connection but each insert operation going into a
 different attached in-memory db.


 On Apr 19, 2008, at 9:20 AM, Dan wrote:


 On Apr 19, 2008, at 6:06 AM, James Gregurich wrote:


 I'll ask this question. The answer is probably no, but I'll ask  
 it
 for the sake of completeness.


 Suppose I created an in-memory db. I use the attach command to
 associate an additional in-memory db. Suppose I assign the main  
 db to
 thread 1 and the associated db to thread 2. Can I share the
 connection
 across the 2 threads if each thread works exclusively in its own  
 db?

 I am aware that the connection is generally not threadsafe, but  
 will
 it work if the two threads don't operate on the same db at the same
 time?

 As of 3.5, sqlite connections are threadsafe by default. With
 earlier versions, this trick will not work.

 Dan.


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-19 Thread James Gregurich

ok. I think I see what you are suggesting. You want to provide a  
mechanism to create memory files that SQLite thinks are disk  
files...that way you could create multiple connections to them.

an interesting idea. However, that may be more effort than I can
justify at this point. I'll think about it.


yes. I am fluent in STL.


On Apr 19, 2008, at 1:19 PM, Virgilio Alexandre Fornazin wrote:

 Imagine the following scenario (I assume you know the C++ stdlib)

 A map of strings (filenames) to in-memory file handlers (the objects  
 that
 will handle the shared memory or heap files).

 These file handlers will exist until the process exits and do not
 receive a deletefile() VFS call.

 File handlers can synchronize RW-Locks using internal mutexes/critical
 sections/semaphores/spin locks, etc.

 When you create a new file in the VFS, a new handler is created and
 assigned to
 that filename and registered in this map.






 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of James Gregurich
 Sent: sábado, 19 de abril de 2008 17:02
 To: General Discussion of SQLite Database
 Subject: Re: [sqlite] multiple writers for in-memory datastore


 I don't immediately see how that would solve the problem.

 The limitation of interest here (based on my perhaps limited
 understanding) is that locking has file-level granularity. I don't
 immediately see how a VFS implementation would allow for changing the
 locking granularity of the overall system.

 -James

 On Apr 19, 2008, at 12:03 PM, Virgilio Fornazin wrote:

 what about creating a VFS for such a task? It can be accomplished in
 many ways,
 using heap memory, shared memory... not so easy to do, but not too
 complicated either... locking can be provided by multiple-reader
 single-writer lock strategies, etc...

 On Sat, Apr 19, 2008 at 2:29 PM, James Gregurich  
 [EMAIL PROTECTED]
 wrote:


 oh good! That isn't the version that ships with Leopard, but I can
 live with deploying my own version as part of my app.

 Will I get the writer parallelism I'm after as long as each thread
 writes exclusively into its own attached db?


 in other words...two bulk insert operations going on simultaneously
 on the same connection but each insert operation going into a
 different attached in-memory db.


 On Apr 19, 2008, at 9:20 AM, Dan wrote:


 On Apr 19, 2008, at 6:06 AM, James Gregurich wrote:


 I'll ask this question. The answer is probably no, but I'll ask
 it
 for the sake of completeness.


 Suppose I created an in-memory db. I use the attach command to
 associate an additional in-memory db. Suppose I assign the main
 db to
 thread 1 and the associated db to thread 2. Can I share the
 connection
 across the 2 threads if each thread works exclusively in its own
 db?

 I am aware that the connection is generally not threadsafe, but
 will
 it work if the two threads don't operate on the same db at the  
 same
 time?

 As of 3.5, sqlite connections are threadsafe by default. With
 earlier versions, this trick will not work.

 Dan.


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich
hi!


I need to set up multiple writers to an in-memory datastore. I just  
discovered that you can't have more than one connection to an in- 
memory store.

I can give each task its own independent datastore if there is a way I  
can merge the contents of each store into a central store. Is there a  
way to attach an existing  in-memory store to another in-memory store?

If not, how hard would it be to modify the sqlite source to allow such  
an attachment to be made given the two connection pointers to two  
independent stores?


thanks,
James Gregurich
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

If the sqlite statement had a temporary storage area so that I could  
load up a bunch of rows and then commit them in one shot so that the  
lock on the db was not held very long by a single transaction, that  
would probably work.

However, my reading of the documentation leads me to believe that  
using the bind functions in a loop with an insert statement will lock  
the entire in-memory DB until the bulk insert is done... which means I  
would get no benefit from concurrency.

Is this correct?
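
For reference, the pattern in question looks roughly like this; a sketch only, with an invented table tbl(col):

#include <sqlite3.h>

/* Bulk insert: one prepared statement, bind/step/reset in a loop, all
** inside a single transaction.  The database stays write-locked from
** the first insert until COMMIT. */
int bulk_insert(sqlite3 *db, const char **values, int n){
  sqlite3_stmt *stmt;
  int i, rc;

  rc = sqlite3_prepare_v2(db, "INSERT INTO tbl(col) VALUES(?1);", -1, &stmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  rc = SQLITE_DONE;                           /* so an empty batch still commits */

  sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  for(i=0; i<n; i++){
    sqlite3_bind_text(stmt, 1, values[i], -1, SQLITE_TRANSIENT);
    rc = sqlite3_step(stmt);                  /* SQLITE_DONE on success */
    sqlite3_reset(stmt);
    if( rc!=SQLITE_DONE ) break;
  }
  sqlite3_finalize(stmt);
  sqlite3_exec(db, rc==SQLITE_DONE ? "COMMIT;" : "ROLLBACK;", 0, 0, 0);

  return rc==SQLITE_DONE ? SQLITE_OK : rc;
}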

BTW: does the question I posed on modifying the library to add a  
feature to attach an in-memory data store to another one via the C  
API belong on the sqlite-dev list?

thanks,
James

On Apr 18, 2008, at 9:43:22 AM, John Stanton wrote:

 Just use a thread as a DB handler.  Queue transactions to it using  
 some
 IPC mechanism like a message queue or named pipe.  Another way would  
 be
 to synchronize access to the DB handle using a mutex.
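
A bare-bones sketch of the mutex variant, assuming pthreads; g_db is the single shared handle, opened elsewhere at startup:

#include <pthread.h>
#include <sqlite3.h>

static sqlite3 *g_db;                               /* opened once at startup */
static pthread_mutex_t g_db_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Every thread funnels its SQL through this wrapper, so only one
** thread ever touches the connection at a time. */
int exec_serialized(const char *sql){
  int rc;
  pthread_mutex_lock(&g_db_mutex);
  rc = sqlite3_exec(g_db, sql, 0, 0, 0);
  pthread_mutex_unlock(&g_db_mutex);
  return rc;
}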


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich


I'm working on a commercial, boxed, desktop product. I can't be  
creating new mounted disks on a customer's system every time he uses  
my application.



How about this...


suppose I create a temporary db file on disk. Each task (a thread)  
opens a connection to the temp file and attaches an in-memory db to  
it. The task then writes to tables in the attached in-memory db. When  
the task is done, the tables in the in-memory db are merged into the  
disk file and the attached in-memory db is closed. reader connections  
would only read from the disk file.

Will such a design give me full concurrency on my writer tasks until  
they are ready to flush their results to the disk file? As I  
understand it, the attached db won't be locked by reading done on the  
disk file.
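
Sketched in C, one writer task in that design might look something like the code below. Note that, as Dennis points out in a later reply, the attach has to go the other way around: each task opens its own in-memory database and attaches the shared file to it. The file name shared_temp.db, the staging table and disk.tbl are all invented here, and disk.tbl is assumed to already exist with matching columns:

#include <sqlite3.h>

/* One writer task: stage rows in a private in-memory db, then flush
** them into the shared temp file in one short transaction. */
int writer_task(const char **rows, int n){
  sqlite3 *db;
  sqlite3_stmt *stmt;
  int i, rc;

  if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return SQLITE_ERROR;
  sqlite3_exec(db, "ATTACH 'shared_temp.db' AS disk;", 0, 0, 0);
  sqlite3_exec(db, "CREATE TABLE staging(col TEXT);", 0, 0, 0);

  /* Build up the batch locally; no lock is taken on the shared file here. */
  sqlite3_prepare_v2(db, "INSERT INTO staging(col) VALUES(?1);", -1, &stmt, 0);
  for(i=0; i<n; i++){
    sqlite3_bind_text(stmt, 1, rows[i], -1, SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_reset(stmt);
  }
  sqlite3_finalize(stmt);

  /* Flush: the shared file is only locked for this short transaction. */
  rc = sqlite3_exec(db,
         "BEGIN;"
         "INSERT INTO disk.tbl SELECT * FROM staging;"
         "DELETE FROM staging;"
         "COMMIT;", 0, 0, 0);

  sqlite3_close(db);
  return rc;
}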


thanks,
James



On Apr 18, 2008, at 10:33:39 AM, Dennis Cote wrote:

 James Gregurich wrote:
 If the sqlite statement had a temporary storage area so that I could
 load up a bunch of rows and then commit them in one shot so that the
 lock on the db was not held very long by a single transaction, that
 would probably work.


 Using a RAM disk you could insert rows into one database as they are
 generated. This would be your batch.

 Then periodically attach that database to the main database and copy  
 all
 the new rows to the main DB table in one transaction.

 attach batch.db as batch;
 begin;
 insert into main.tbl select * from batch.tbl;
 delete from batch.tbl;
 commit;
 detach batch;

 This will only lock the main database for a short period while it is
 updated.

 However, my reading of the documentation leads me to believe that
 using the bind functions in a loop with an insert statement will lock
 the entire in-memory DB until the bulk insert is done... which  
 means I
 would get no benefit from concurrency.

 Is this correct?

 Readers are blocked by a writer until the write transaction is  
 committed.


 BTW: does the question I posed on modifying the library to add a
 feature to attach an in-memory data store to another one via the C
 API belong on the sqlite-dev list?


 That list doesn't get much traffic. Your question was fine here.

 It would be fairly involved to change the handling of in memory
 databases. They don't have names to use with the attach command, and
 they don't do any locking since they can only be accessed from a  
 single
 connection currently. The locking in SQLite is done with POSIX file
 locks which can't be used for in memory databases since they aren't
 files. You're welcome to try of course, but it seems like a lot of  
 work
 for little return when there are other ways to do what you want.

 HTH
 Dennis Cote


 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

On Apr 18, 2008, at 1:25:36 PM, Dennis Cote wrote:

 James Gregurich wrote:

 suppose I create a temporary db file on disk. Each task (a thread)
 opens a connection to the temp file and attaches an in-memory db to
 it.

 You will have to open the memory database and attach the db file since
 SQLite can't attach to a memory database.

is this information wrong?  
http://www.blitzbasic.com/Community/posts.php?topic=60981



 Why have you proposed to use a temporary database file? Can this  
 data be
 destroyed between executions?

yes. the nature of the application is such that the data is loaded  
from a proprietary file format, processed, and presented to the user.  
There is no need to store the data back on disk.

I'm interested in a flexible, convenient, in-memory datastore. I  
thought it was going to work but was stopped dead in my tracks when I  
realized I couldn't open multiple connections on an in-memory db.



 The way I have suggested, the readers only open the database file.  
 They
 can read as long as no update is in progress. The updates will be
 batched into the independent memory database. During an update the
 database file will be locked, so readers will have to wait.
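
On the reader side, a busy timeout may be enough to ride out those short flushes. A sketch, reusing the invented shared_temp.db name from the earlier example:

#include <sqlite3.h>

/* Reader connection: wait up to 2 seconds when a writer is flushing
** its batch, instead of failing immediately with SQLITE_BUSY. */
int open_reader(sqlite3 **pdb){
  int rc = sqlite3_open("shared_temp.db", pdb);
  if( rc==SQLITE_OK ) sqlite3_busy_timeout(*pdb, 2000);
  return rc;
}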

great. that will get me concurrency at the expense of unnecessary disk  
I/O. I suppose I will have to decide if using sqlite is worth the price.

I may just fall back to defining a table as a std::vector of  
std::map entries and skip the whole idea of using an embedded db  
engine.

I'll have to think about all of this.


thanks,
James
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

On Apr 18, 2008, at 2:33:32 PM, Dennis Cote wrote:

 To share an attached database the threads must be able to name it, and
 this is only possible with a file database.

you could change the open() function to be able to assign a name to an  
in-memory db and then keep a mapping of all the names internally. You  
could also provide an API call that takes an existing connection to an  
in-memory store and attaches its db to another pre-existing db on  
another connection. Seems like the underlying foundation is already  
there to do it. But, I admit, I have no knowledge of the  
implementation details of SQLite.




 Perhaps you can replace the proprietary file format with a permanent
 SQLite database file (and then again maybe not).

We don't control those formats. They are controlled by certain large,  
well-known software companies. We just reverse-engineer their formats.


 You could implement a server thread that accesses a single memory
 database which accepts commands from, and passes the results back to,
 your other threads as John suggested. You will have to provide some  
 form
 of resource management for the shared resource, whether it is a shared
 memory database, file, or something else.

unless I misunderstand the way the SQLite API works, that isn't really  
practical.

my task is to read a chunk of data, parse it, and insert a record into a  
table (a number of records in a loop, of course). To do that, I have to  
prepare a statement and then bind data values to the statement in a loop.

Once I begin the transaction and prepare the statement, the entire db  
is locked up for the duration of the bulk insert.  If that is true,  
then I'll lose all opportunity for parallelism.
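
That lock is only held for the duration of each transaction, though, so one common mitigation (not something the library does for you) is to commit in chunks of a few thousand rows and let readers in between batches. A sketch, again with an invented tbl(col):

#include <sqlite3.h>

/* Insert n rows, committing every `chunk` rows so the write lock is
** released periodically and readers get a chance in between. */
int chunked_insert(sqlite3 *db, const char **rows, int n, int chunk){
  sqlite3_stmt *stmt;
  int i, rc = SQLITE_OK;

  if( sqlite3_prepare_v2(db, "INSERT INTO tbl(col) VALUES(?1);",
                         -1, &stmt, 0)!=SQLITE_OK ) return SQLITE_ERROR;

  sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  for(i=0; i<n; i++){
    sqlite3_bind_text(stmt, 1, rows[i], -1, SQLITE_TRANSIENT);
    if( sqlite3_step(stmt)!=SQLITE_DONE ){ rc = SQLITE_ERROR; break; }
    sqlite3_reset(stmt);
    if( (i+1)%chunk==0 ){
      /* Earlier chunks stay committed even if a later one fails. */
      sqlite3_exec(db, "COMMIT; BEGIN;", 0, 0, 0);
    }
  }
  sqlite3_exec(db, rc==SQLITE_OK ? "COMMIT;" : "ROLLBACK;", 0, 0, 0);
  sqlite3_finalize(stmt);
  return rc;
}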

If I have to write my own temporary storage containers to hold data  
while it waits to be committed by a datastore thread, then I might as  
well just write my own containers and be done with the task rather  
than going to the expense of using a SQL data store.

One reason to use SQLite is that it would take care of the  
synchronization of multiple writers and readers for me. If I have to  
write all that myself, then why bother with SQLite?


One of my options is to use Core Data on the Macintosh. That will do  
what I want, as it caches record inserts and does one big commit... and  
it handles the synchronization. However, what do I do with the  
lovable Windows platform?

Oh well. I'll figure it all out somehow.





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

I'll ask this question. The answer is probably no, but I'll ask it  
for the sake of completeness.


Suppose I created an in-memory db. I use the attach command to  
associate an additional in-memory db. Suppose I assign the main db to  
thread 1 and the associated db to thread 2. Can I share the connection  
across the 2 threads if each thread works exclusively in its own db?

I am aware that the connection is generally not threadsafe, but will  
it work if the two threads don't operate on the same db at the same  
time?


thanks,
James


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users