Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread P Kishor
On Sat, May 23, 2009 at 6:34 AM, Rosemary Alles  wrote:
>
> Thanks Simon. I have been leaning that way too - considering switching.
>
> -rosemary.
>
> On May 22, 2009, at 5:55 PM, Simon Slavin wrote:
>
>>
>> On 23 May 2009, at 12:10am, Rosemary Alles wrote:
>>
>>> Multiple machines with multiple cpus. [snip]
>>
>>> The total size of
>>> current DB is up to 70mb.
>>
>> I suspect you'd be better off with MySQL.  (Am I allowed to say that
>> here ?)  See the last page of
>>
>> 
>>
>> MySQL runs as a service which can be connected to over the internet.
>> It runs all the time, whether anything is talking to it or not.
>> Everything that wants to change the database does it by talking to the
>> same server.  Consequently, the server can do its own change-caching,
>> keep indices in memory, and do the many other things that can be done
>> when you don't have to worry about other people accessing the files on
>> disk.  And it's designed to cope well with access from many clients
>> concurrently: the server doesn't need the client to do busy/waiting,
>> it just gives you the most up-to-date answers it has.
>>
>> At work, where I can run servers and need 24/7 uptime and concurrent
>> access from multiple clients I use MySQL.  At home where I want tiny/
>> fast/simple/embeddable/non-server I use SQLite.
>>
>> Fortunately, it's relatively easy to export from sqlite3 and import
>> into MySQL, or vice versa by exporting the database as a set of SQL
>> commands (.dump in sqlite3) and making minor adjustments.  And the
>> basic installation of MySQL (all you need) is free.
>>
>> I'm sorry if discussion of MySQL is forbidden here, but it sounds like
>> the right solution for this poster.

Suggesting a better alternative is definitely good advice, and
should be evaluated against one's needs. My advice would be to consider
Postgres instead of MySQL as an alternative. Pg is generally
considered a better database than MySQL, but subjective criteria
aside, Pg is also licensed under better, more flexible
terms, and since Pg was the inspiration for SQLite, you are likely to
find more compatibility between the two.


>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Puneet Kishor http://www.punkish.org/
Carbon Model http://carbonmodel.org/
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org/
Science Commons Fellow, Geospatial Data http://sciencecommons.org
Nelson Institute, UW-Madison http://www.nelson.wisc.edu/
---
collaborate, communicate, compete
===
Sent from Lucknow, UP, India
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Olaf Schmidt

"Rosemary Alles"  schrieb im
Newsbeitrag news:57c55bd7-8d56-4913-adce-cbb2978dd...@ipac.caltech.edu...

> > What do you mean with "cores"?
> > Are these multiple client *machines* - or do we talk about
> > a single client-machine with multiple (cpu-)cores here?
>
> Multiple machines with multiple cpus.
Ah Ok, that clears things up a bit.

> > In case you mean only "several cores" on a single client-machine -
> > why is your DB "behind" NFS at all (and not on a local disk)?
>
> N/A - see above - several machines.
So what you really want in that case is a DB-server implementation -
and that's what SQLite does not support "out of the box".

Working in "faked server-mode" against a share is in no
way satisfying under heavy write-load, because neither
the SMB protocol nor NFS is well suited for DB-typical
transfers.

Nonetheless one can wrap the SQLite engine behind a
dedicated AppServer instance (which always works
locally against its SQLite DB files, "visible" directly
only to that server instance or host).

And if the communication with this server instance
is then done over sockets again, but with better-suited
protocols (transferring "disconnected" resultsets - and
splitting your larger write jobs against the server instance
into smaller pieces), then SQLite performs very well -
as I already posted in an earlier reply to you.

> We achieve neither, they are both (read and write - not done
> simultaneously, i.e. never read when writing and vice versa) ...
That's normal in SQLite. If a writer-lock is currently held, then
the readers (reader threads or processes) are blocked as well.
But in case you are working locally against your DB file,
these writes (when done in smaller chunks) usually block
the DB (and any waiting readers) only for milliseconds.

And in your case (using NFS or SMB) the whole locking -
and also the release of these locks - is very costly.
So I'm not that sure the main culprit here is your
current busy-handling (although you could probably speed
up the whole thing a bit if you find a better-working
"wait-strategy" in case of SQLITE_BUSY errors) - but first you
need to get rid of these protocols - and use SQLite locally,
from inside a (multithreaded) server instance on a dedicated
host.

> > That does not mean, that your Inserts will be slow (if only
> > one process/thread or core will handle them) - after all you
> > perform your DB-writes against a single resource (your disk).
> > And whilst working against a local disk, SQLite can achieve
> > ca. 5,000-20,000 inserts per second, depending of course
> > on the Column-Count and if the underlying table has indexes
> > defined on its columns. In my tests I can achieve ca.
> > 12,000 inserts per second on a table with 8 "mixed-type"
> > Columns (having no indexes defined on it - normal 7200rpm
> > SATA-HardDisk).
>
> Obviously, we are doing something weird, or our NFS
> configuration is strange or whatever.
I'd say, what you currently see is "normal" (more or less).

> The disk is obviously not local
Yep - I was aware of that - what I wanted to show with
the timings above was what delays you should expect
in case you do smaller write jobs against your DB in
a locally working server instance (against a local disk).
Insert 1000 new records? That will block the readers for
ca. 10-20 msec - then reading can be done again in
parallel by the reader threads.
And the usual ratio between read and write requests
is more like 90/10 in a typical "business-DB" scenario (if not
less) - and the just-mentioned "1000 inserts at once"
don't come up that often.

> The insert speeds you specify are more than likely
> heavily dependent on configuration/ setup.
> Are you also working on multiple machines with multiple
> cpus over NFS?
No - this is (more or less) the "out-of-the-box" performance
of the SQLite-Engine, running on a "desktop-OS", working
against a normal SATA-Drive - wrapping the Inserts in
a Transaction - and using the bind-APIs to fill in the
"Param-Slots" of the next "Insert-Record".

As said - that's what you can expect, if you find a
good AppServer-implementation which wraps the
engine and works locally against the HardDisks.
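
For reference, a minimal C sketch of that insert pattern - one
transaction around many rows, reusing a single prepared statement
through the bind APIs. The table and column names are made-up
placeholders:

#include "sqlite3.h"

/* Bulk-insert sketch: one transaction, one prepared statement that is
   reset and reused for every row. Table "t" and columns "a","b" are
   hypothetical. */
int bulk_insert(sqlite3 *db, int nrows)
{
    sqlite3_stmt *stmt = NULL;
    int i;
    int rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_prepare_v2(db, "INSERT INTO t(a,b) VALUES(?1,?2)",
                            -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        for (i = 0; i < nrows && rc == SQLITE_OK; i++) {
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_bind_text(stmt, 2, "some text", -1, SQLITE_STATIC);
            rc = sqlite3_step(stmt);               /* expect SQLITE_DONE */
            rc = (rc == SQLITE_DONE) ? sqlite3_reset(stmt) : rc;
        }
    }
    sqlite3_finalize(stmt);
    sqlite3_exec(db, rc == SQLITE_OK ? "COMMIT" : "ROLLBACK", 0, 0, 0);
    return rc;
}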

> In this case, on a local disk, much less than a second
> per update.
See - there you are... ;-)

> However, scaled over the previously described
> "concurrent" scenario with several identical copies
> of the process (program) attempting to access the
> database over NFS from several computers with
> several cpus  - up to a minute per update - It's ridiculous.
Yep - as said - either find a good SQLite-(App-)Server-
wrapper for unixoid systems (maybe Real-SQLServer
or John Stanton's implementation - or the Network-
engines which are mentioned in the SQLite-Wiki) -
or just switch to MySQL or PostgreSQL.

But reduce your expectations when working over
SMB or NFS with sqlite.

> Don't have those numbers handy, but will soon. The total
> size of current DB is up to 70mb.
Oh - in that case (and from 

Re: [sqlite] BLOB Literals and NULL

2009-05-22 Thread John Machin
On 23/05/2009 10:56 AM, Greg and Tracy Hill wrote:
> Is it possible to use the BLOB literal syntax X'ABCD' with encoded NULLs?
> 
> For example:
> 
> INSERT INTO mytable VALUES (X'BADF00D0')

I see no NULL here, encoded or otherwise.
I don't even see an ASCII NUL here; NUL is character terminology, 
whereas the topic is BLOBs, and BLOBs are *B*inary.

> When I do this, the length of the data is 2 instead of the expected 4.

Well, if you concatenate a BLOB containing x'00' with a TEXT item, 
you'll get NULs:

sqlite> select length(x'00414243');
4
sqlite> select length('xxx' || x'00414243' || 'zzz');
3
sqlite>

Don't do such things; treat BLOBs as *B*inary objects.

HTH,
John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] BLOB Literals and NULL

2009-05-22 Thread Simon Slavin

On 23 May 2009, at 1:56am, Greg and Tracy Hill wrote:

> Is it possible to use the BLOB literal syntax X'ABCD' with encoded
> NULLs?

My understanding is that you can store anything you want as a BLOB.   
But that when you read that value back again, you'll get a BLOB back.   
You can't write a value as one type but read it back as another  
(except for coercion from INTEGER to REAL, etc.).

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] BLOB Literals and NULL

2009-05-22 Thread Igor Tandetnik
"Greg and Tracy Hill" 
wrote in message news:bay112-w186a5fad8703c04c8f30c0dd...@phx.gbl
> Is it possible to use the BLOB literal syntax X'ABCD' with encoded
> NULLs?

Yes.

> For example:
>
> INSERT INTO mytable VALUES (X'BADF00D0')
>
> When I do this, the length of the data is 2 instead of the expected 4.

How do you determine this? Not with strlen, I hope.

select length(X'BADF00D0');
4

Igor Tandetnik
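
To make the strlen point concrete, here is a minimal C sketch; the
table and column names are hypothetical. X'BADF00D0' is four bytes
with an embedded 0x00, so strlen() on the raw pointer stops after two
bytes - which matches the "length 2" the original poster saw:

#include <string.h>
#include "sqlite3.h"

/* Read a BLOB column correctly: take the length from
   sqlite3_column_bytes(), never from strlen(). */
void read_blob(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT data FROM mytable", -1,
                           &stmt, NULL) != SQLITE_OK) return;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char *blob = sqlite3_column_blob(stmt, 0);
        int nbytes = sqlite3_column_bytes(stmt, 0); /* 4 for X'BADF00D0' */
        int asCstr = (int)strlen(blob);             /* 2: stops at the 0x00 */
        /* work with (blob, nbytes) - a BLOB is not a C string */
        (void)asCstr;
    }
    sqlite3_finalize(stmt);
}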



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Rosemary Alles

Thanks Simon. I have been leaning that way too - considering switching.

-rosemary.

On May 22, 2009, at 5:55 PM, Simon Slavin wrote:

>
> On 23 May 2009, at 12:10am, Rosemary Alles wrote:
>
>> Multiple machines with multiple cpus. [snip]
>
>> The total size of
>> current DB is up to 70mb.
>
> I suspect you'd be better off with MySQL.  (Am I allowed to say that
> here ?)  See the last page of
>
> 
>
> MySQL runs as a service which can be connected to over the internet.
> It runs all the time, whether anything is talking to it or not.
> Everything that wants to change the database does it by talking to the
> same server.  Consequently, the server can do its own change-caching,
> keep indices in memory, and do the many other things that can be done
> when you don't have to worry about other people accessing the files on
> disk.  And it's designed to cope well with access from many clients
> concurrently: the server doesn't need the client to do busy/waiting,
> it just gives you the most up-to-date answers it has.
>
> At work, where I can run servers and need 24/7 uptime and concurrent
> access from multiple clients I use MySQL.  At home where I want tiny/
> fast/simple/embeddable/non-server I use SQLite.
>
> Fortunately, it's relatively easy to export from sqlite3 and import
> into MySQL, or vice versa by exporting the database as a set of SQL
> commands (.dump in sqlite3) and making minor adjustments.  And the
> basic installation of MySQL (all you need) is free.
>
> I'm sorry if discussion of MySQL is forbidden here, but it sounds like
> the right solution for this poster.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] BLOB Literals and NULL

2009-05-22 Thread Greg and Tracy Hill

Is it possible to use the BLOB literal syntax X'ABCD' with encoded NULLs?

For example:

INSERT INTO mytable VALUES (X'BADF00D0')

When I do this, the length of the data is 2 instead of the expected 4.

I know I could use prepared statements but the library I am using doesn't 
expose them.

Any Ideas?

- Greg





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Simon Slavin

On 23 May 2009, at 12:10am, Rosemary Alles wrote:

> Multiple machines with multiple cpus. [snip]

> The total size of
> current DB is up to 70mb.

I suspect you'd be better off with MySQL.  (Am I allowed to say that  
here ?)  See the last page of



MySQL runs as a service which can be connected to over the internet.   
It runs all the time, whether anything is talking to it or not.   
Everything that wants to change the database does it by talking to the  
same server.  Consequently, the server can do its own change-caching,  
keep indices in memory, and do the many other things that can be done  
when you don't have to worry about other people accessing the files on  
disk.  And it's designed to cope well with access from many clients  
concurrently: the server doesn't need the client to do busy/waiting,  
it just gives you the most up-to-date answers it has.

At work, where I can run servers and need 24/7 uptime and concurrent  
access from multiple clients I use MySQL.  At home where I want tiny/ 
fast/simple/embeddable/non-server I use SQLite.

Fortunately, it's relatively easy to export from sqlite3 and import  
into MySQL, or vice versa by exporting the database as a set of SQL  
commands (.dump in sqlite3) and making minor adjustments.  And the  
basic installation of MySQL (all you need) is free.

I'm sorry if discussion of MySQL is forbidden here, but it sounds like  
the right solution for this poster.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Rosemary Alles
If I use BEGIN_EXCLUSIVE for replace (i.e. write) transactions and
receive a BUSY state for -say- one of two "concurrent" hits, is a
rollback required? I think not? I can simply re-try, can I not? My
thinking is that no "work" has been performed, hence a retry will be
sufficient?

-rosemary.



On May 22, 2009, at 4:10 PM, Rosemary Alles wrote:

> Dear Olaf,
>
> On May 22, 2009, at 3:21 PM, Olaf Schmidt wrote:
>
>>
>> "Rosemary Alles"  schrieb im
>> Newsbeitrag
>> news:f113017d-8851-476d-8e36-56b2c4165...@ipac.caltech.edu...
>>
>>> I have a database (simple schema) with two tables on which I
>>> perform  "concurrent" udpates over NFS ...
>>> ...
>>> Large updates, distributed over several cores over NFS -
>>> supposedly  concurrent "but not really"?
>> Could you give some more detailed background-info about
>> your current scenario?
>>
>> What do you mean with "cores"?
>> Are these multiple client *machines* - or do we talk about
>> a single client-machine with multiple (cpu-)cores here?
>>
>
> Multiple machines with multiple cpus.
>
>
>> In case you mean only "several cores" on a single client-machine -
>> why is your DB "behind" NFS at all (and not on a local disk)?
>>
>
> N/A - see above - several machines.
>
>> In either case (be it multiple client-machines which talk to your
>> DB - or just multiple cores on a (capable but) single client-
>> machine) - you will not achieve faster inserts against your DB
>> by working concurrently (regarding the write-direction to the
>> SQLite-DB).
>> SQLite can profit from multiple cores (threads) only in the
>> Read-Direction.
>
>
> We achieve neither, they are both (read and write - not done
> simultaneously, i.e. never read when writing and vice versa) god awful
> slow.
>
>>
>>
>> That does not mean, that your Inserts will be slow (if only
>> one process/thread or core will handle them) - after all you
>> perform your DB-writes against a single resource (your disk).
>> And whilst working against a local disk, SQLite can achieve
>> ca. 5,000-20,000 inserts per second, depending of course
>> on the Column-Count and if the underlying table has indexes
>> defined on its columns. In my tests I can achieve ca.
>> 12,000 inserts per second on a table with 8 "mixed-type"
>> Columns (having no indexes defined on it - normal 7200rpm
>> SATA-HardDisk).
>>
>
> Obviously, we are doing something weird, or our NFS configuration is
> strange or whatever. The disk is obviously not local. The insert speeds
> you specify are more than likely heavily dependent on configuration/
> setup. Are you also working on multiple machines with multiple cpus
> over NFS?
>
>> So, what timings do you currently get, in case you perform
>> your updates only running on one single client (or core)?
>
> In this case, on a local disk, much less than a second per update.
> However, scaled over the previously described "concurrent" scenario
> with several identical copies of the process (program) attempting to
> access the database over NFS from several computers with several cpus
> - up to a minute per update - It's ridiculous.
>
>>
>> And could you please check the DB-Size before and after
>> such a typical "update-job" (so that we get an impression
>> about the transferred byte-volume - maybe you could also
>> give the count of new records in case of insert-jobs)?
>>
>
> Don't have those numbers handy, but will soon. The total size of
> current DB is up to 70mb.
>
>> And do you work over GBit-Ethernet (or an even faster
>> "NFS-channel")?
>>
>
> The latter.
>
> My primary concern now is to prevent a dead-lock.
>
> Many thanks,
> rosemary.
>
>> Regards,
>>
>> Olaf Schmidt
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Rosemary Alles
Using busy_timeout in itself won't do the job. From what I'm
gathering, I need to rollback the transaction that returned BUSY, reset
the statement, and retry once the database is not BUSY anymore? I
probably also don't want the default BEGIN. I probably want either
IMMEDIATE or EXCLUSIVE. Additionally, I also probably want logic to
distinguish between deadlock BUSY and "regular" BUSY, unless
IMMEDIATE/EXCLUSIVE guarantees against a dead-lock.
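
For what it's worth, a minimal C sketch of that retry pattern,
assuming BEGIN IMMEDIATE: because the write lock is requested before
any statements run, a BUSY on the BEGIN leaves nothing to roll back,
and the classic upgrade-deadlock (a read transaction trying to become
a writer mid-transaction) cannot arise. The do_writes() callback and
the retry/back-off values are hypothetical:

#include <unistd.h>
#include "sqlite3.h"

/* Manual retry sketch: take the write lock up front with BEGIN
   IMMEDIATE. If the BEGIN itself returns SQLITE_BUSY, no transaction
   was started, so there is nothing to roll back - just back off and
   retry. If a later step (or the COMMIT) returns SQLITE_BUSY, roll
   back and start over. */
int write_with_retry(sqlite3 *db, int (*do_writes)(sqlite3 *))
{
    int attempt;
    for (attempt = 0; attempt < 10; attempt++) {
        int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
        if (rc == SQLITE_BUSY) {
            usleep(50000 * (attempt + 1));   /* crude linear back-off */
            continue;                        /* nothing to roll back */
        }
        if (rc != SQLITE_OK) return rc;

        rc = do_writes(db);                  /* the actual updates */
        if (rc == SQLITE_OK)
            rc = sqlite3_exec(db, "COMMIT", 0, 0, 0);
        if (rc == SQLITE_OK) return SQLITE_OK;

        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
        if (rc != SQLITE_BUSY) return rc;    /* real error - give up */
        usleep(50000 * (attempt + 1));
    }
    return SQLITE_BUSY;
}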


On May 22, 2009, at 2:19 PM, Steven Fisher wrote:

> http://www.sqlite.org/c3ref/busy_timeout.html

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Rosemary Alles
Dear Olaf,

On May 22, 2009, at 3:21 PM, Olaf Schmidt wrote:

>
> "Rosemary Alles"  schrieb im
> Newsbeitrag  
> news:f113017d-8851-476d-8e36-56b2c4165...@ipac.caltech.edu...
>
>> I have a database (simple schema) with two tables on which I
>> perform  "concurrent" udpates over NFS ...
>> ...
>> Large updates, distributed over several cores over NFS -
>> supposedly  concurrent "but not really"?
> Could you give some more detailed background-info about
> your current scenario?
>
> What do you mean with "cores"?
> Are these multiple client *machines* - or do we talk about
> a single client-machine with multiple (cpu-)cores here?
>

Multiple machines with multiple cpus.


> In case you mean only "several cores" on a single client-machine -
> why is your DB "behind" NFS at all (and not on a local disk)?
>

N/A - see above - several machines.

> In either case (be it multiple client-machines which talk to your
> DB - or just multiple cores on a (capable but) single client-
> machine) - you will not achieve faster inserts against your DB
> by working concurrently (regarding the write-direction to the
> SQLite-DB).
> SQLite can profit from multiple cores (threads) only in the
> Read-Direction.


We achieve neither, they are both (read and write - not done  
simultaneously, i.e. never read when writing and vice versa) god awful  
slow.

>
>
> That does not mean, that your Inserts will be slow (if only
> one process/thread or core will handle them) - after all you
> perform your DB-writes against a single resource (your disk).
> And whilst working against a local disk, SQLite can achieve
> ca. 5,000-20,000 inserts per second, depending of course
> on the Column-Count and if the underlying table has indexes
> defined on its columns. In my tests I can achieve ca.
> 12,000 inserts per second on a table with 8 "mixed-type"
> Columns (having no indexes defined on it - normal 7200rpm
> SATA-HardDisk).
>

Obviously, we are doing something weird, or our NFS configuration is  
strange or whatever. The disk is obviously not local. The insert speeds
you specify are more than likely heavily dependent on configuration/ 
setup. Are you also working on multiple machines with multiple cpus  
over NFS?

> So, what timings do you currently get, in case you perform
> your updates only running on one single client (or core)?

In this case, on a local disk, much less than a second per update.  
However, scaled over the previously described "concurrent" scenario  
with several identical copies of the process (program) attempting to  
access the database over NFS from several computers with several cpus  
- up to a minute per update - It's ridiculous.

>
> And could you please check the DB-Size before and after
> such a typical "update-job" (so that we get an impression
> about the transferred byte-volume - maybe you could also
> give the count of new records in case of insert-jobs)?
>

Don't have those numbers handy, but will soon. The total size of
current DB is up to 70mb.

> And do you work over GBit-Ethernet (or an even faster
> "NFS-channel")?
>

The latter.

My primary concern now is to prevent a dead-lock.

Many thanks,
rosemary.

> Regards,
>
> Olaf Schmidt
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Olaf Schmidt

"Rosemary Alles"  schrieb im
Newsbeitrag news:f113017d-8851-476d-8e36-56b2c4165...@ipac.caltech.edu...

> I have a database (simple schema) with two tables on which I
> perform  "concurrent" udpates over NFS ...
> ...
> Large updates, distributed over several cores over NFS -
> supposedly  concurrent "but not really"?
Could you give some more detailed background-info about
your current scenario?

What do you mean with "cores"?
Are these multiple client *machines* - or do we talk about
a single client-machine with multiple (cpu-)cores here?

In case you mean only "several cores" on a single client-machine -
why is your DB "behind" NFS at all (and not on a local disk)?

In either case (be it multiple client-machines which talk to your
DB - or just multiple cores on a (capable but) single client-
machine) - you will not achieve faster inserts against your DB
by working concurrently (regarding the write-direction to the
SQLite-DB).
SQLite can profit from multiple cores (threads) only in the
Read-Direction.

That does not mean, that your Inserts will be slow (if only
one process/thread or core will handle them) - after all you
perform your DB-writes against a single resource (your disk).
And whilst working against a local disk, SQLite can achieve
ca. 5,000-20,000 inserts per second, depending of course
on the Column-Count and if the underlying table has indexes
defined on its columns. In my tests I can achieve ca.
12,000 inserts per second on a table with 8 "mixed-type"
Columns (having no indexes defined on it - normal 7200rpm
SATA-HardDisk).

So, what timings do you currently get, in case you perform
your updates only running on one single client (or core)?
And could you please check the DB-Size before and after
such a typical "update-job" (so that we get an impression
about the transferred byte-volume - maybe you could also
give the count of new records in case of insert-jobs)?

And do you work over GBit-Ethernet (or an even faster
"NFS-channel")?

Regards,

Olaf Schmidt



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Steven Fisher
On 22-May-2009, at 1:11 PM, Rosemary Alles wrote:

> Does anyone have solid code examples (in C/C++) or pseudo code of how
> to establish re-try code/logic successfully?

Just use sqlite3_busy_timeout.
http://www.sqlite.org/c3ref/busy_timeout.html
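
A minimal usage sketch (the 5000 ms value is an arbitrary choice). One
caveat: the busy handler is deliberately not invoked when SQLite
detects that waiting cannot help - e.g. a connection inside a read
transaction trying to upgrade to a write lock - so a busy timeout
alone does not cover the deadlock case:

#include "sqlite3.h"

/* Open a connection and install a busy timeout: SQLite then retries
   internally for up to 5000 ms before returning SQLITE_BUSY. */
int open_with_timeout(const char *path, sqlite3 **pdb)
{
    int rc = sqlite3_open(path, pdb);
    if (rc == SQLITE_OK)
        rc = sqlite3_busy_timeout(*pdb, 5000);
    return rc;
}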
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Igor Tandetnik
"Rosemary Alles"  wrote
in message news:f113017d-8851-476d-8e36-56b2c4165...@ipac.caltech.edu
> What exactly is the difference between BEGIN_IMMEDIATE and
> BEGIN_EXCLUSIVE?

http://sqlite.org/atomiccommit.html

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] preserve column constraints

2009-05-22 Thread Igor Tandetnik
"ed"  wrote in message
news:d92e6441090522n431d4434gee6a286aa4b8d...@mail.gmail.com
> Does  'INSERT INTO mem_db SELECT * FROM file_db' work the same as
> 'CREATE TABLE mem_db AS SELECT * FROM file_db'  with regard to
> transactions? Are the inserts going to be handled as one large
> transaction in either scenario?

Yes.

> Do I need to explicitly use the BEGIN
> and COMMIT commands?

You may want to, if you want CREATE TABLE and INSERT to happen together 
atomically. You don't need to just for INSERT.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-22 Thread Rosemary Alles
Hullo all,

Does anyone have solid code examples (in C/C++) or pseudo code of how  
to establish re-try code/logic successfully?

I have a database (simple schema) with two tables on which I perform
"concurrent" updates over NFS (yes, terrible I know - but it's what we
have). Eventually, I get into a dead-locked (I think) situation with a
hot journal. Only updates are done. At present we don't have any
re-try logic, which is pretty dumb, but we are new to Sqlite3 and
erroneously assumed it does the re-try logic on its own.

Any pointers will be helpful.

I'm starting to think that sqlite3 is not a good fit for our purposes.  
Large updates, distributed over several cores over NFS - supposedly  
concurrent "but not really"?

Also:
What exactly is the difference between BEGIN_IMMEDIATE and  
BEGIN_EXCLUSIVE?


Many thanks,
-rosemary.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] preserve column constraints

2009-05-22 Thread Igor Tandetnik
"ed"  wrote in message
news:d92e64410905221106t5c8b77eay9cbc5a4e3ccb8...@mail.gmail.com
> I guess I was assuming the command "create table x as select * from y"
> should preserve column constraints

Well, it can't, in general. Nothing says that a column in the newly 
created table must correspond to a column in the existing table. Consider:

create table x as
select 1, a, b+1, c+d from y;

What constraints should be placed on the four columns of table x, and 
why?

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] preserve column constraints

2009-05-22 Thread ed
Does  'INSERT INTO mem_db SELECT * FROM file_db' work the same as
'CREATE TABLE mem_db AS SELECT * FROM file_db'  with regard to
transactions? Are the inserts going to be handled as one large
transaction in either scenario? Do I need to explicitly use the BEGIN
and COMMIT commands?

thanks,
ed

On Fri, May 22, 2009 at 10:50 AM, Igor Tandetnik  wrote:
> "ed"  wrote in message
> news:d92e64410905221021p39a4f833y960370145e187...@mail.gmail.com
>> I have an application that copies my disk based db into an in memory
>> table (":memory:") using the command
>> "create table mem_db as select * from file_db;"
>>
>> The problem is the disk db has a column set to INTEGER PRIMARY KEY,
>> but this is not preserved when the new in-memory db is created. When I
>> do a "select * from sqlite_master;" on the in-memory db, there is no
>> IPK. How can I accomplish creating the in-memory db with the file db
>> data and preserve the column configuration from the original table?
>
> You create the table the way you want it, then run INSERT ... SELECT
> statement to populate it.
>
> You may also find this interesting:
> http://sqlite.org/c3ref/backup_finish.html
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
thanks,
ed
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] preserve column constraints

2009-05-22 Thread ed
ok, thanks
I guess I was assuming the command "create table x as select * from y"
should preserve column constraints and maybe I was going about it
wrong.

thanks for the link to the backup api, i will see if this makes sense
for my application.

thanks,
ed

On Fri, May 22, 2009 at 10:50 AM, Igor Tandetnik  wrote:
> "ed"  wrote in message
> news:d92e64410905221021p39a4f833y960370145e187...@mail.gmail.com
>> I have an application that copies my disk based db into an in memory
>> table (":memory:") using the command
>> "create table mem_db as select * from file_db;"
>>
>> The problem is the disk db has a column set to INTEGER PRIMARY KEY,
>> but this is not preserved when the new in-memory db is created. When I
>> do a "select * from sqlite_master;" on the in-memory db, there is no
>> IPK. How can I accomplish creating the in-memory db with the file db
>> data and preserve the column configuration from the original table?
>
> You create the table the way you want it, then run INSERT ... SELECT
> statement to populate it.
>
> You may also find this interesting:
> http://sqlite.org/c3ref/backup_finish.html
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
thanks,
ed
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] preserve column constraints

2009-05-22 Thread Igor Tandetnik
"ed"  wrote in message
news:d92e64410905221021p39a4f833y960370145e187...@mail.gmail.com
> I have an application that copies my disk based db into an in memory
> table (":memory:") using the command
> "create table mem_db as select * from file_db;"
>
> The problem is the disk db has a column set to INTEGER PRIMARY KEY,
> but this is not preserved when the new in-memory db is created. When I
> do a "select * from sqlite_master;" on the in-memory db, there is no
> IPK. How can I accomplish creating the in-memory db with the file db
> data and preserve the column configuration from the original table?

You create the table the way you want it, then run INSERT ... SELECT 
statement to populate it.

You may also find this interesting: 
http://sqlite.org/c3ref/backup_finish.html

Igor Tandetnik
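
A minimal C sketch of that approach, using ATTACH plus
INSERT ... SELECT inside one transaction so the CREATE and the copy
happen together. All table and column names here are hypothetical
stand-ins for the real schema:

#include "sqlite3.h"

/* Build an in-memory copy that keeps the INTEGER PRIMARY KEY: declare
   the schema explicitly, ATTACH the disk DB, then bulk-copy the rows
   inside one transaction. */
int load_into_memory(sqlite3 **pmem, const char *disk_path)
{
    int rc = sqlite3_open(":memory:", pmem);
    if (rc != SQLITE_OK) return rc;

    char *sql = sqlite3_mprintf(
        "ATTACH %Q AS src;"
        "BEGIN;"
        "CREATE TABLE mem_db(id INTEGER PRIMARY KEY, name TEXT);"
        "INSERT INTO mem_db SELECT id, name FROM src.file_db;"
        "COMMIT;"
        "DETACH src;", disk_path);
    rc = sqlite3_exec(*pmem, sql, 0, 0, 0);
    sqlite3_free(sql);
    return rc;
}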



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] preserve column constraints

2009-05-22 Thread ed
Hello,

I have an application that copies my disk based db into an in memory
table (":memory:") using the command
"create table mem_db as select * from file_db;"

The problem is the disk db has a column set to INTEGER PRIMARY KEY,
but this is not preserved when the new in-memory db is created. When I
do a "select * from sqlite_master;" on the in-memory db, there is no
IPK. How can I accomplish creating the in-memory db with the file db
data and preserve the column configuration from the original table?

I am using SQLite 3.3.4.

-- 
thanks,
ed
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] about sqlite on line back up API

2009-05-22 Thread Dan

On May 22, 2009, at 3:55 PM, pierr chen wrote:

> Hi,
>   I am using sqlite on a Flash-based embedded system in the following
> way, intended to 1) improve the insert/update/delete performance and
> 2) reduce
> the number of Flash writes:
>
>  a. when the system is up, load the flash database into memory
>  b. perform the delete/update/insert operations on the memory database
>  c. sync the memory-based database to the flash periodically
> (say every 10
> seconds) using the online backup API. I was using the loadOrSaveDb()
> function
> listed here:
>  http://www.sqlite.org/backup.html
>
> Compared with directly interacting with the Flash-based database, the
> insert/update/delete performance is improved, no doubt. However,
> I found
> the following problems:
> a. It seems that when using loadOrSaveDb(), the backup engine will
> always
> copy the pages from the memory database to the flash database even if
> these two
> databases have exactly the same data. If that is the case, then I
> cannot
> achieve my goal 2 - reduce the number of Flash writes.

That's the nature of it at the moment, unfortunately. The backup
procedure always copies every page of the source db.

>  b. There is always a journal file present when I reset the system. I
> am
> thinking that if I reset the system after loadOrSaveDb() has
> returned, as
> shown below, there should be no journal file since the write has
> finished.

Did you open the flash database with "PRAGMA locking_mode=exclusive", or
some other option that would turn on persistent journals?

Dan.
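
For reference, the loadOrSaveDb() routine on that page is essentially
the following; the key point for this thread is that
sqlite3_backup_step(p, -1) copies every page of the source in a
single pass:

#include "sqlite3.h"

/* Copy the whole "main" database between an in-memory DB and a file.
   Because backup_step() is asked to copy all pages at once, every
   sync rewrites the flash file even when nothing has changed. */
int loadOrSaveDb(sqlite3 *pInMemory, const char *zFilename, int isSave)
{
    sqlite3 *pFile;
    int rc = sqlite3_open(zFilename, &pFile);
    if (rc == SQLITE_OK) {
        sqlite3 *pFrom = isSave ? pInMemory : pFile;
        sqlite3 *pTo   = isSave ? pFile     : pInMemory;
        sqlite3_backup *pBackup = sqlite3_backup_init(pTo, "main",
                                                      pFrom, "main");
        if (pBackup) {
            sqlite3_backup_step(pBackup, -1);   /* -1 = copy every page */
            sqlite3_backup_finish(pBackup);
        }
        rc = sqlite3_errcode(pTo);
    }
    sqlite3_close(pFile);
    return rc;
}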

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] about sqlite on line back up API

2009-05-22 Thread pierr chen
Hi,
   I am using sqlite on a Flash-based embedded system in the following
way, intended to 1) improve the insert/update/delete performance and 2) reduce
the number of Flash writes:

  a. when the system is up, load the flash database into memory
  b. perform the delete/update/insert operations on the memory database
  c. sync the memory-based database to the flash periodically (say every 10
seconds) using the online backup API. I was using the loadOrSaveDb() function
listed here:
  http://www.sqlite.org/backup.html

 Compared with directly interacting with the Flash-based database, the
insert/update/delete performance is improved, no doubt. However, I found the
following problems:
 a. It seems that when using loadOrSaveDb(), the backup engine will always
copy the pages from the memory database to the flash database even if these two
databases have exactly the same data. If that is the case, then I cannot
achieve my goal 2 - reduce the number of Flash writes.
  b. There is always a journal file present when I reset the system. I am
thinking that if I reset the system after loadOrSaveDb() has returned, as
shown below, there should be no journal file since the write has finished.

// a background thread
while (1) {
    loadOrSaveDb(...);   // returns when the save has finished
    sleep(10);           // <-- reset the system in between, say at the
                         //     5th second
}

 Thanks for your comments.



Pierr Chen
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users