Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-19 Thread Dan Kennedy
On Tue, 2007-06-19 at 11:51 -0700, Gerry Snyder wrote:
> Michael Hooker wrote:
> > Many thanks for the explanation Dan.  
> Ditto the thanks.
> > I suspected the purpose of ROLLBACK was as you say, but couldn't see 
> > why it was used here.  You point out the "under the hood" difference 
> > between ROLLBACK and COMMIT, but what about END? My main (third-party, 
> > commercial) application may well have some data stored waiting to be 
> > fed into the database after the file has been copied, and if it is 
> > forced to discard its cache that presumably means these are lost, 
> > which wouldn't be a good idea. 

END is the same as COMMIT in sqlite.

> It shouldn't have to. The cache Dan was referring to was an internal 
> copy of (part of) what is already in the data base. If the data base 
> file has been updated, that copy has to be discarded, since it may not 
> be valid--of course, it may be valid, but figuring out whether it is 
> would be a lot more work than just rereading it. Anyhow, this is all 
> happening at a much lower level than the application data you are 
> referring to, which is still valid and should be entered into the file.

Right. If another app has some "writes" (dirty pages) in its cache,
then it will already have at least a RESERVED lock on the database
file. If this is the case, the "BEGIN IMMEDIATE" statement executed
by the copy-file process will fail to obtain its EXCLUSIVE database
lock.

So the only logic the file-copy process needs is "Do not do the file
copy until after the BEGIN IMMEDIATE succeeds".
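
A minimal C sketch of that copy-file logic (the copy_file() helper and
the one-second retry policy are made up for illustration):

#include <unistd.h>
#include <sqlite3.h>

/* Hypothetical OS-level file copy helper, supplied elsewhere. */
extern int copy_file(const char *src, const char *dst);

/* Lock out writers, copy the database file, release the lock.
** BEGIN IMMEDIATE returns SQLITE_BUSY while some other connection
** still holds a RESERVED (or stronger) lock, i.e. has unwritten
** changes, so we simply wait and retry. */
int backup_db(sqlite3 *db, const char *src, const char *dst){
  int rc;
  while( (rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0))==SQLITE_BUSY ){
    sleep(1);   /* a writer is still active - try again in a second */
  }
  if( rc!=SQLITE_OK ) return rc;
  rc = copy_file(src, dst);   /* safe: no writer can be active now */
  /* ROLLBACK just releases the lock. Nothing was changed, and the
  ** change counter is untouched, so other caches stay valid. */
  sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
  return rc;
}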

Dan.


> HTH,
> 
> Gerry
> 





Re: [sqlite] Re: How to use pragmas from code?

2007-06-19 Thread Dan Kennedy
On Tue, 2007-06-19 at 15:39 -0400, Igor Tandetnik wrote:
> Shane Harrelson
> <[EMAIL PROTECTED]> wrote:
> > To use pragmas from code, do I simply prepare them as a regular SQL
> > statement and then execute them?
> 
> Yes.

Another thing to note: Some pragmas take effect during 
sqlite3_prepare(), not sqlite3_step() (examples: cache_size, 
temp_store). For this reason calling sqlite3_reset() and then
reusing a PRAGMA statement has the potential to produce 
confusing results (or at least SQLITE_SCHEMA errors).

Personally, I would use sqlite3_exec() to execute pragma statements
from C code.
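
For instance, a minimal sketch (error handling kept short):

#include <stdio.h>
#include <sqlite3.h>

/* Set a pragma with sqlite3_exec() right after opening, so no
** prepared PRAGMA statement is left around to be reset and re-run. */
int set_cache_size(sqlite3 *db){
  char *errmsg = 0;
  int rc = sqlite3_exec(db, "PRAGMA cache_size=5000", 0, 0, &errmsg);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "pragma failed: %s\n", errmsg);
    sqlite3_free(errmsg);
  }
  return rc;
}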

Dan.

> > And when can they/should they be done?   As the first statement after
> > an open?
> 
> Some pragmas have to be set early, others may be changed at any time.
> 
> > Are the pragma values stored with the database?
> 
> Some pragmas affect the format of the database file - these are stored 
> in the database. Others only affect current connection - these are not 
> stored.
> 
> 
> Is there a particular pragma you are worried about?
> 
> Igor Tandetnik 
> 
> 





Re: [sqlite] Tcl and BLOB data

2007-06-19 Thread Jeff Godfrey

Very informative.  Thank you.

Jeff


- Original Message - 
From: <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, June 19, 2007 8:59 PM
Subject: Re: [sqlite] Tcl and BLOB data


"Jeff Godfrey" <[EMAIL PROTECTED]> wrote:


I don't really understand why, when my data has a binary 
representation and I'm trying to place it in a BLOB field,

the string representation is used instead (assuming it exists).



I wrote up a terse explanation on the Tclers wiki.  See

  http://wiki.tcl.tk/19627






Re: [sqlite] Recommend server for Windows?

2007-06-19 Thread Gilles Ganault

At 20:47 19/06/2007 -0500, John Stanton wrote:

Such a server can be made simpler than mine by making it single-threaded.


Is it publicly available from http://www.viacognis.com?

Thanks
G.





Re: [sqlite] Tcl and BLOB data

2007-06-19 Thread drh
"Jeff Godfrey" <[EMAIL PROTECTED]> wrote:
> 
> I don't really understand why, when my data has a binary 
> representation and I'm trying to place it in a BLOB field,
> the string representation is used instead (assuming it exists).
> 

I wrote up a terse explanation on the Tclers wiki.  See

   http://wiki.tcl.tk/19627


--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] Recommend server for Windows?

2007-06-19 Thread John Stanton

Gilles Ganault wrote:

At 16:49 19/06/2007 -0700, Medi Montaseri wrote:

While it's difficult to tell what the problem statement (or context) 
is, ingredients like HTTP and POST spell SOAP.



The context is that, until now, our apps were mostly used on stand-alone 
hosts, with only a few customers hosting the (small) SQLite database file 
on a shared drive on the LAN, so performance was just fine. Now we have 
a customer whose DB file is about 50MB... and on a 10Mbps LAN an INSERT 
takes about 8 seconds.


So we have to find a solution ASAP, with minimal changes to our app, at 
least until we get around to rewriting the DB part so that it uses a 
location-independent connector.


Gilles.

Gilles, contact me at [EMAIL PROTECTED] and I might have some stuff 
which may help you perform a rapid transformation to a networked DB.

JS











Re: [sqlite] Recommend server for Windows?

2007-06-19 Thread John Stanton


Gilles Ganault wrote:

At 11:20 19/06/2007 -0400, Clay Dowling
 wrote:


I'm going to recommend PostgreSQL.



Thanks for the idea, but if possible we'd rather have something really 
basic, typically a single EXE. Besides, using e.g. PostgreSQL would 
require rewriting our application.


I went through the list of servers on the wiki, and the following 
solutions look interesting:


http://users.libero.it/irwin/ : uSQLiteServer - An SQLite network wrapper

http://www.oneledger.co.uk/sql4sockets.html

dhRPCServer + dhSQLite-COM-Wrapper http://www.thecommon.net/8.html

http://www.protracksource.com/node/42 : "SQLite Server is a free 
database server that is the central data store for Pro Track Source 
Connected. It comes with your purchase of Pro Track Source Connected, 
but we also offer it here for a free download in case you have lost it." 
(CHECK if server can be used with any SQLite client)


=> BTW, I thought of a simple solution: a web-like server process that 
uses HTTP as the protocol, with commands sent via the POST method. 
HTTP is tried and true, and since we use TCP, the server can be 
configured to accept only a single connection at a time, taking care of 
concurrency.


Does someone know if something like this has been done?

Thank you.

I wrote such a server.  It uses HTTP and embeds Sqlite.  It is 
multi-threaded and works very effectively.  It handles file serving for 
regular WWW purposes, and an RPC mechanism for AJAX-style database access 
from a browser.  It also includes an embedded server pages processor.


Embedding Sqlite in such a server is an effective way of getting a 
highly efficient network database.  In operation the server barely 
registers CPU usage, an indication of the effectiveness of the approach. 
I carefully use sendfile/TransmitFile for network traffic to get good 
throughput.


Such a server can be made simpler than mine by making it single-threaded.











Re: [sqlite] Tcl and BLOB data

2007-06-19 Thread Jeff Godfrey


- Original Message - 
From: <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, June 19, 2007 3:43 PM
Subject: Re: [sqlite] Tcl and BLOB data


I did this change at your request, because it seemed like
a good enhancement.


Thanks.  I agree, it sounds like a good enhancement.  Though, I don't 
really understand why, when my data has a binary representation and 
I'm trying to place it in a BLOB field, the string representation is 
used instead (assuming it exists).  It seems that the fact I'm pushing 
the data into a BLOB field should force the selection of the binary 
representation.  That said, I'm sure I'm over-simplifying the 
situation.



How can we help you to do your own build?  It really is not that
hard.  What operating system are you using?


Hmmm... Maybe point me to some docs.  I seem to remember seeing some 
on the Wiki before.  In reality, I *have* tried to build the package 
before (quite some time ago), and don't think I had any luck.  I'm 
running on WinXP, though I have mingw, msys, and tcl installations. 
I'll give it another try if there's some current documentation 
available...


Thank you.

Jeff





RE: [sqlite] Recommend server for Windows?

2007-06-19 Thread Gilles Ganault

At 16:49 19/06/2007 -0700, Medi Montaseri wrote:
While it's difficult to tell what the problem statement (or context) is, 
ingredients like HTTP and POST spell SOAP.


The context is that, until now, our apps were mostly used on stand-alone 
hosts, with only a few customers hosting the (small) SQLite database file on 
a shared drive on the LAN, so performance was just fine. Now we have a 
customer whose DB file is about 50MB... and on a 10Mbps LAN an INSERT 
takes about 8 seconds.


So we have to find a solution ASAP, with minimal changes to our app, at 
least until we get around to rewriting the DB part so that it uses a 
location-independent connector.


Gilles.





RE: [sqlite] Recommend server for Windows?

2007-06-19 Thread Medi Montaseri
While it's difficult to tell what the problem statement (or context) is,
ingredients like HTTP and POST spell SOAP. 

-Original Message-
From: Gilles Ganault [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 19, 2007 4:32 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Recommend server for Windows?

At 11:20 19/06/2007 -0400, Clay Dowling
  wrote:
>I'm going to recommend PostgreSQL.

Thanks for the idea, but if possible we'd rather have something really
basic, typically a single EXE. Besides, using e.g. PostgreSQL would
require rewriting our application.

I went through the list of servers on the wiki, and the following
solutions look interesting:

http://users.libero.it/irwin/ : uSQLiteServer - An SQLite network
wrapper

http://www.oneledger.co.uk/sql4sockets.html

dhRPCServer + dhSQLite-COM-Wrapper http://www.thecommon.net/8.html

http://www.protracksource.com/node/42 : "SQLite Server is a free
database server that is the central data store for Pro Track Source
Connected. It comes with your purchase of Pro Track Source Connected,
but we also offer it here for a free download in case you have lost it."
(CHECK if server can be used with any SQLite client)

=> BTW, I thought of a simple solution: a web-like server process that
uses HTTP as the protocol, with commands sent via the POST method.
HTTP is tried and true, and since we use TCP, the server can be
configured to accept only a single connection at a time, taking care of
concurrency.

Does someone know if something like this has been done?

Thank you.








Re: [sqlite] Recommend server for Windows?

2007-06-19 Thread Gilles Ganault

At 11:20 19/06/2007 -0400, Clay Dowling
 wrote:

I'm going to recommend PostgreSQL.


Thanks for the idea, but if possible we'd rather have something really basic, 
typically a single EXE. Besides, using e.g. PostgreSQL would require 
rewriting our application.


I went through the list of servers on the wiki, and the following solutions 
look interesting:


http://users.libero.it/irwin/ : uSQLiteServer - An SQLite network wrapper

http://www.oneledger.co.uk/sql4sockets.html

dhRPCServer + dhSQLite-COM-Wrapper http://www.thecommon.net/8.html

http://www.protracksource.com/node/42 : "SQLite Server is a free database 
server that is the central data store for Pro Track Source Connected. It 
comes with your purchase of Pro Track Source Connected, but we also offer 
it here for a free download in case you have lost it." (CHECK if server can 
be used with any SQLite client)


=> BTW, I thought of a simple solution: a web-like server process that uses 
HTTP as the protocol, with commands sent via the POST method. HTTP is 
tried and true, and since we use TCP, the server can be configured to 
accept only a single connection at a time, taking care of concurrency.


Does someone know if something like this has been done?

Thank you.





Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-19 Thread Michael Hooker

Thank you, Gerry.

So basically, it seems, ROLLBACK, COMMIT or END would all do nicely in this 
particular context. Hopefully I'll never need to worry about locking etc., 
because I have no intention of changing the data in the database, and so 
far I've successfully managed to extract what I want from the live database 
using a SELECT statement with only the occasional "busy" message.  However, 
the prospect of what amounts to an automatic daily back-up and then working 
on the backup is much more comfortable.


I think I'm going to have to find some excuse for closing the data-provider 
down (next time Windows Update insists I have to re-boot, probably), make a 
quick backup copy of the inactive database in the normal way, turn the 
data-provider back on again and run an experimental program on the new live 
database.  If it doesn't work, switch off again, quickly restore the backup, 
get going again and  come back to you kind folk.  All in the wee small 
hours when the data-flow is at its lowest.


Just for interest, this is what it's all about: 
http://www.kinetic-avionics.co.uk/sbs-1.php


The software that processes the data from this clever piece of kit uses 
Sqlite3, but it has to be said that it's missing some vital features, one of 
them being the ability to produce a report of what happened between time A 
and time B.  It will produce a backup but this takes an age and is done by 
exporting each table in the database as a separate .csv file - and last time 
I tried it on a live database the reporting application just went into "not 
responding" mode.  So we have to do it ourselves.


Thanks again

Michael Hooker

http://www.AvPhotosOnline.org.uk
- Original Message - 
From: "Gerry Snyder" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, June 19, 2007 7:51 PM
Subject: Re: [sqlite] Proper way to transfer a live sqlite database



Michael Hooker wrote:

Many thanks for the explanation Dan.

Ditto the thanks.
I suspected the purpose of ROLLBACK was as you say, but couldn't see why 
it was used here.  You point out the "under the hood" difference between 
ROLLBACK and COMMIT, but what about END? My main (third-party, 
commercial) application may well have some data stored waiting to be fed 
into the database after the file has been copied, and if it is forced to 
discard its cache that presumably means these are lost, which wouldn't be 
a good idea.


It shouldn't have to. The cache Dan was referring to was an internal copy 
of (part of) what is already in the data base. If the data base file has 
been updated, that copy has to be discarded, since it may not be valid--of 
course, it may be valid, but figuring out whether it is would be a lot 
more work than just rereading it. Anyhow, this is all happening at a much 
lower level than the application data you are referring to, which is still 
valid and should be entered into the file.


HTH,

Gerry








Re: [sqlite] Tcl and BLOB data

2007-06-19 Thread drh
"Jeff Godfrey" <[EMAIL PROTECTED]> wrote:
> Interesting.  That sounds like exactly what I need.  I'm curious, did 
> the patch somehow arise from my query, or is the timing of the query 
> and the patch just coincidental?
> 
> Also, how soon would you expect this patch to make it into an 
> "official build"?  I've never built SQLite from the sources before, 
> and don't really have the time to work through the details right now.
> 

I did this change at your request, because it seemed like a good
enhancement.  The next "official build" will be 3.4.1.  Point
releases normally come every 1 to 2 months, and we just had 3.4.0,
so do not look for 3.4.1 until the end of July, realistically.

How can we help you to do your own build?  It really is not that
hard.  What operating system are you using?

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] Tcl and BLOB data

2007-06-19 Thread Jeff Godfrey
Interesting.  That sounds like exactly what I need.  I'm curious, did 
the patch somehow arise from my query, or is the timing of the query 
and the patch just coincidental?


Also, how soon would you expect this patch to make it into an 
"official build"?  I've never built SQLite from the sources before, 
and don't really have the time to work through the details right now.


Thank you.

Jeff Godfrey

- Original Message - 
From: <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, June 19, 2007 12:17 PM
Subject: Re: [sqlite] Tcl and BLOB data


"Jeff Godfrey" <[EMAIL PROTECTED]> wrote:


Is there a way I can "force" SQLite to insert my data as a BLOB, 
even if the containing variable has a string representation?  I 
tried to CAST the data to a BLOB during the insert, but the results 
were the same.




If you get the patch I just checked in

 http://www.sqlite.org/cvstrac/chngview?cn=4092

and if you use a "@" character instead of "$" in front of the
variable name, and the variable has a bytearray representation,
then the variable will be bound as a BLOB even if the variable
also has a text representation.

--
D. Richard Hipp <[EMAIL PROTECTED]>







[sqlite] Re: How to use pragmas from code?

2007-06-19 Thread Igor Tandetnik

Shane Harrelson
<[EMAIL PROTECTED]> wrote:

To use pragmas from code, do I simply prepare them as a regular SQL
statement and then execute them?


Yes.


And when can they/should they be done?   As the first statement after
an open?


Some pragmas have to be set early, others may be changed at any time.


Are the pragma values stored with the database?


Some pragmas affect the format of the database file - these are stored 
in the database. Others only affect current connection - these are not 
stored.



Is there a particular pragma you are worried about?

Igor Tandetnik 






[sqlite] How to use pragmas from code?

2007-06-19 Thread Shane Harrelson

To use pragmas from code, do I simply prepare them as a regular SQL
statement and then execute them?

And when can they/should they be done?   As the first statement after an
open?

Are the pragma values stored with the database?  Or do they have to be
issued after each open?

Thanks.
-Shane


Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-19 Thread Gerry Snyder

Michael Hooker wrote:
Many thanks for the explanation Dan.  

Ditto the thanks.
I suspected the purpose of ROLLBACK was as you say, but couldn't see 
why it was used here.  You point out the "under the hood" difference 
between ROLLBACK and COMMIT, but what about END? My main (third-party, 
commercial) application may well have some data stored waiting to be 
fed into the database after the file has been copied, and if it is 
forced to discard its cache that presumably means these are lost, 
which wouldn't be a good idea. 


It shouldn't have to. The cache Dan was referring to was an internal 
copy of (part of) what is already in the data base. If the data base 
file has been updated, that copy has to be discarded, since it may not 
be valid--of course, it may be valid, but figuring out whether it is 
would be a lot more work than just rereading it. Anyhow, this is all 
happening at a much lower level than the application data you are 
referring to, which is still valid and should be entered into the file.


HTH,

Gerry




Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Ken
1. Review your Oracle 10g DB and fix the "HUGE I/O" issues. 
 2. Why not do the lookups using Oracle? Allocate the extra 5GB to the Oracle 
buffer cache.
 3. If you want good lookup performance, try to use the array-level interface 
so that you don't need to take multiple trips (context switches) to the DB.
 4. Use a multi-threaded approach for throughput with Oracle, since you have 16 
CPUs.
 

Andrew Finkenstadt <[EMAIL PROTECTED]> wrote: On 6/19/07, pompomJuice  wrote:
>
>
> Running a huge Oracle 10G database
> Running a 3rd party application that generates HUGE IO.
> Part of this 3rd party application is my application that does lookups.
>
> 1.) Data comes in in the form of files.
> 2.) 3rd party application decodes and prepares these tickets or records
> for
> insert into Oracle
> 3.) As these records flow by, we use key fields inside them to do lookups
> into this lightweight SQL database as doing it into oracle would be even
> slower.
>


When doing lookup tables in Oracle (9i or later stable versions), you are
better off doing an index-organized table to reduce by 1 seek the amount of
I/O Oracle does, and using a hash-key index-organized table to reduce the
seeks to (usually) ONLY 1.  Without knowing where in your box the Oracle
instance is, it's difficult to say for sure, but assuming its on the same
machine, I would try this out, based on my experience:

PIN the lookup table (index) into the buffer cache, giving Oracle the extra
5GB of memory space to do so.  Assuming you have a persistent connection to
Oracle from your fly-by-update process and your fly-by-lookup process, you
might find Oracle performs adequately.  Either way I'd measure what's taking
the time in doing lookups in Oracle that would be even slower;  I assume you
use bind parameters and cached SQL cursors (you get this automatically with
most PL/SQL constructs [excepting dynamic sql type 4]), and basically have 1
parse per cursor prepare at application startup time.



Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Ken
Can you consolidate your multiple binaries into a single binary?
 Then use threading and SQLite's shared caching to perform the lookups and 
updates.
 That way the cache wouldn't get invalidated.
 
 Someone else here correct me if this is a bad idea!
 
 

pompomJuice <[EMAIL PROTECTED]> wrote: 
I suspected something like this, as it makes sense.

I have multiple binaries/different connections ( and I cannot make them
share a connection ) using this one lookup table and depending on which
connection checks first, it will update the table. 

My question is then: if any one connection makes any change to the database
(not necessarily to the huge lookup table), will all the other connections
invalidate their entire cache? Or is the cache dropped per table/btree?

Thanks for that response. I can already move ahead with better knowledge
of how the caching works.

Regards.



Dan Kennedy-4 wrote:
> 
> On Tue, 2007-06-19 at 01:06 -0700, pompomJuice wrote:
>> Hello there.
>> 
>> I need some insight into how SQLite's caching works. I have a database
>> that
>> is quite large (5Gb) sitting on a production server whose IO is severely
>> taxed. This causes my SQLite db to perform very poorly. Most of the time
>> my
>> application just sits there and uses about 10% of a CPU where it would
>> use a
>> 100% on test systems with idle IO. Effectively what the application does
>> is
>> constantly doing lookups as fast as it can.
>> 
>> To counteract this I increased the page size to 8192 (Unix server with
>> advfs
>> having 8K block sizes) and increased the MAX_PAGES value in sqliteInt.h
>> to
>> 512000. This worked. My application starts at low memory usage and as it
>> gradually gains more memory. As it gains more memory it uses more CPU and
>> reaches a point where it finally uses 100% CPU and 5Gb of ram.
>> 
>> Every now and then the lookup table is updated. As soon as the
>> application
>> does this however the performance goes back to a crawl and slowly builds
>> up
>> again as described in the previous paragraph. The memory usage stays at
>> 5Gb.
>> All that I can think of is that the update invalidates the cache. 
> 
> Probably right.
> 
>> The update
>> is not very big, say 20 rows in a table that has about 45 million
>> rows.
>> 
>> What exactly is happening here?
> 
> Are you using 3.3.17? And is it an external process (or at least a
> different connection) doing the update?
> 
> If so, the update is modifying the pager change-counter, invalidating
> the pager cache held by the lookup application. The lookup app has
> to start loading pages from the disk again, instead of just reading
> its cache.
> 
> The only way around this performance hit is to do the UPDATE through
> the lookup app, using the same database connection.
> 
> Dan.
> 
>> Regards.
> 
> 
> 
> 







[sqlite] More SQLite Misuse

2007-06-19 Thread Andre du Plessis
DLL version

  Sqlite3.3.17

 

The os is windows

 

Following up on my last query about sqlite3_step, I decided to do some
more tests with threads. If synchronized properly, it seems that you can
use more than one thread without any problem as long as Sqlite3_finalize
is called. Is this correct?

 

Please note that this is a very simple query being executed: "select *
from threads where id = 1"

 

Imagine that in the following scenarios both threads are executing
simultaneously and will lock on the global critical section (so they are
synchronized), using the same DB handle.

 

Scenario 1

   THREAD1                       THREAD2
   LockGlobalCriticalSection     LockGlobalCriticalSection
   Sqlite3_prepare               Sqlite3_prepare
   Sqlite3_step                  Sqlite3_step   <-- SQLITE_MISUSE: library
                                                    routine called out of
                                                    sequence here
   Sqlite3_reset                 Sqlite3_reset
   UnLockGlobalCriticalSection   UnLockGlobalCriticalSection

// The following sequence works fine though

   THREAD1                       THREAD2
   LockGlobalCriticalSection     LockGlobalCriticalSection
   Sqlite3_prepare               Sqlite3_prepare
   Sqlite3_step                  Sqlite3_step
   Sqlite3_finalize              Sqlite3_finalize
   UnLockGlobalCriticalSection   UnLockGlobalCriticalSection

 

 

If my tests are correct, it is not possible to retain a prepared
statement across threads, and it has to be re-prepared each time??
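
For what it's worth, a rough C/pthreads sketch of the sequence that works
(scenario 2), with the critical section modeled as a global mutex and all
names invented for illustration:

#include <pthread.h>
#include <sqlite3.h>

static sqlite3 *g_db;                         /* shared DB handle */
static pthread_mutex_t g_cs = PTHREAD_MUTEX_INITIALIZER;

/* Each thread runs the working sequence from scenario 2: prepare,
** step, finalize, all inside the critical section, so no statement
** object ever crosses a thread boundary. */
static void *worker(void *arg){
  sqlite3_stmt *stmt;
  (void)arg;
  pthread_mutex_lock(&g_cs);
  sqlite3_prepare(g_db, "select * from threads where id = 1", -1, &stmt, 0);
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    /* read columns here */
  }
  sqlite3_finalize(stmt);                     /* release before unlocking */
  pthread_mutex_unlock(&g_cs);
  return 0;
}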




Re: [sqlite] Tcl and BLOB data

2007-06-19 Thread drh
"Jeff Godfrey" <[EMAIL PROTECTED]> wrote:
> 
> Is there a way I can "force" SQLite to insert my data as a BLOB, even if the 
> containing variable has a string representation?  I tried to CAST the data to 
> a BLOB during the insert, but the results were the same.
> 

If you get the patch I just checked in

  http://www.sqlite.org/cvstrac/chngview?cn=4092

and if you use a "@" character instead of "$" in front of the
variable name, and the variable has a bytearray representation,
then the variable will be bound as a BLOB even if the variable
also has a text representation.
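
At the C level this corresponds to binding the raw bytes with
sqlite3_bind_blob() rather than sqlite3_bind_text(); a minimal sketch
(the parameter index and function name are illustrative):

#include <sqlite3.h>

/* Roughly what binding @var as a BLOB amounts to underneath: hand
** the bytearray's raw bytes to sqlite3_bind_blob(). */
int bind_as_blob(sqlite3_stmt *stmt, const void *bytes, int nbytes){
  return sqlite3_bind_blob(stmt, 1, bytes, nbytes, SQLITE_TRANSIENT);
}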

--
D. Richard Hipp <[EMAIL PROTECTED]>





[sqlite] Tcl and BLOB data

2007-06-19 Thread Jeff Godfrey
Hi All,

I have an interesting Tcl / SQLite BLOB issue going on that I hope you can help 
with...

I have binary data stored in a Tcl variable that I'm stuffing into an SQLite 
BLOB field.  The resulting data in the BLOB field is corrupted.  After much 
experimentation, and help from the folks on comp.lang.tcl, it seems that the 
issue is related to whether or not the Tcl variable in question contains a 
string representation at the time it's inserted into the BLOB field.

If it does contain a string representation, the inserted data is incorrect.  If 
it does not contain a string representation, the inserted data is correct.  Not 
wanting to get into a "Tcl Internals" discussion here, my question is this:

Is there a way I can "force" SQLite to insert my data as a BLOB, even if the 
containing variable has a string representation?  I tried to CAST the data to a 
BLOB during the insert, but the results were the same.

I have several potential solutions from the Tcl side, but wanted to explore the 
possibilities from this side also.

Thanks for any input.

Jeff Godfrey

[sqlite] Database Design

2007-06-19 Thread John Stanton

Just some observations after reading mail on this forum.

It is wise when using Sqlite to design your database and application to 
work to the strengths, not weaknesses, of Sqlite.  Be aware that Sqlite 
only uses one index per table in a query and that its cache is tied to 
a connection.  If you observe those constraints you get a good result - 
an easy to implement and maintain database with snappy performance.


If you have a legacy application which violates those constraints or an 
envisaged application which cannot comply, then you should look to using 
something like PostgreSQL.  When you need a big hammer, don't pick up a 
Lite one.





Re: [sqlite] Update of multiple columns

2007-06-19 Thread Dennis Cote

T&B wrote:
In case the original question has been clouded by discussions of 
running a general store and the arson habits of the Hand Crafted 
Guild, allow me to crystallize my purely SQLite question:


I know I can update via:

  update Table1
set
  c1 = (select d1 from Table2 where Table2.id = desired_id)
, c2 = (select d2 from Table2 where Table2.id = desired_id)
, c3 = (select d3 from Table2 where Table2.id = desired_id)
, cn = (select dn from Table2 where Table2.id = desired_id)
  where
rowid = desired_rowid

But that executes the same where clause n times, so it scans through 
Table2 to find the same matching row n times.


Is it possible to construct an update that executes a where clause 
once to locate all of the desired columns?


Thanks,
Tom

Tom,

What you need is a select that returns all the relevant data from table2 
(your current product info table) and the existing data that was 
inserted as the new records were added to your sales history table. This 
can be done using a replace command instead of an update in an after 
insert trigger.


Using the schema name from earlier posts, I would suggest something like 
this.


   CREATE TABLE products (
   product_id INTEGER PRIMARY KEY,
   buy REAL,
   sell REAL,
   desc TEXT
   );

   CREATE TABLE sales (
   sale_id INTEGER PRIMARY KEY,
   customer_id INTEGER   -- FK to customes table
   );

   CREATE TABLE sale_products (
   sale_id INTEGER,  -- FK to sales table
   product_id INTEGER,   -- FK to products table
   buy REAL,
   sell REAL,
   desc TEXT
   );

   create trigger copy_product_info
   after insert on sale_products
   begin
   replace into sale_products(sale_id, product_id, buy, sell, desc)
   select s.sale_id, null, p.buy, p.sell, p.desc
   from sale_products as s
   join products as p on s.product_id = p.product_id
   where s.sale_id = new.sale_id;
   end;

   -- insert records for a single sale
   insert into sales values(1001, 2001);
   insert into sale_products(sale_id, product_id) values(1001, 15083);
   insert into sale_products(sale_id, product_id) values(1001, 23069);
   insert into sale_products(sale_id, product_id) values(1001, 25655);

This uses random data, but I hope you get the idea. The trigger copies 
all the required fields from the product table at the time the record is 
inserted into the history table. If you copy everything of interest, you 
should never need to refer to the product table again, and changes to 
the product table won't affect your sales history.


Note that the product_id that is stored in the sale_product table 
should not be needed after the trigger has executed, so it is replaced by 
a null to ensure that it can't inadvertently be used to link to the 
product table and pull out data that may have changed since the history 
record was created. If you want to retain the product_id value, you 
should change the null in the trigger's select to s.product_id.


HTH
Dennis Cote






Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Joe Wilson
Does every single process (however insignificant) that reads or writes 
to that sqlite database file run on the same 16 processor machine?

> 16 Processor machine
> ~40Gb ram
> EMC storage
> Running a huge Oracle 10G database
> Running a 3rd party application that generates HUGE IO. 
> Part of this 3rd party application is my application that does lookups.


  



Re: [sqlite] Custom Aggregation Not Working

2007-06-19 Thread John Stanton
We have implemented a decimal arithmetic module to handle money in 
Sqlite.  It uses the regular SQL definitions and maintains precision and 
scale.  The data are stored as TEXT and in "display format", right 
aligned with leading spaces so that they display without editing or 
radix transformation when inserted into HTML pages or reports.  The 
arithmetic maintains the fixed point precision and applies correct 
rounding using the classic algorithm which minimizes skew.


Sqlite tries to torpedo these types by wanting to transform decimal 
numbers into integers or floating point.  Some care is needed at the 
interface point to stop that happening.


The alternative way to store money in Sqlite is to use integers with an 
implied decimal point and perform appropriate rounding and point 
position maintenance when performing multiplication and division.  This 
is just like the traditional COMP3 BCD integer usage.
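
As a rough C sketch of that implied-decimal-point style (a scale of two
digits and simple round-half-away-from-zero are assumed here, not the
exact rounding rule described above; names are invented):

#include <stdio.h>

/* Money as integer cents (implied decimal point, scale 2).
** Multiplying two scale-2 values yields a scale-4 result, so divide
** by 100 with rounding to get back to cents. Illustrative only. */
typedef long long cents_t;

cents_t money_mul(cents_t a, cents_t b){
  long long p = a * b;                     /* scale-4 intermediate */
  return (p + (p >= 0 ? 50 : -50)) / 100;  /* round half away from zero */
}

int main(void){
  cents_t price = 1999;                    /* $19.99 */
  cents_t total = money_mul(price, 300);   /* 3.00 * $19.99 = $59.97 */
  printf("%lld.%02lld\n", total / 100, total % 100);
  return 0;
}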


Using floating point for money in exacting applications is a trap for 
young players.  Once bitten, twice shy.


wcmadness wrote:
Hey, Folks: 


I'm writing a financial application and MUST have exact math decimals (no
floats). So, I'm using Python's decimal module. 


My database is Sqlite (and my language is Python with Pysqlite); Sqlite
doesn't offer a non-floating point decimal type. But, it does have adapters
and converters to store data as a native Sqlite type (string / text) in the
database and bring it out of the database and into memory as something else
(Python Decimal, in my case). That works great, but it does NOT seem to
apply to aggregation operations. I need it to. 


So, I tried using another Sqlite feature, custom aggregation functions, but
to no avail. Does anyone know how to fix this? What am I doing wrong? 


I am submitting all of my test code, below. Following that, I am submitting
my results. 

Thanks for your time in helping with this! 

Here's the code: 

import sqlite3
import decimal

# This way will store the value as float, potentially losing precision.

print '-' * 25
print 'Testing native data types, no adapters / converters.'
con = sqlite3.connect('test1.db')
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
cur.execute("insert into test values (null, .1);")
cur.execute("insert into test values (null, .2);")
cur.execute("select * from test;")
rows = cur.fetchall()
for row in rows:
    print row[0], type(row[0])
    print row[1], type(row[1])
cur.close()
con.close()
print '-' * 25

# This way will store the value as decimal, keeping exact precision.

def AdaptDecimal(pdecValue):
    return str(pdecValue)

def ConvertDecimal(pstrValue):
    return decimal.Decimal(pstrValue)

decimal.getcontext().prec = 50
sqlite3.register_adapter(decimal.Decimal, AdaptDecimal)
sqlite3.register_converter("Decimal", ConvertDecimal)

print 'Testing data type with adapters / converters. Decimal numbers should be Python Decimal types.'
con = sqlite3.connect('test2.db', detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
cur.execute("insert into test values (null, .1);")
cur.execute("insert into test values (null, .2);")
cur.execute("select * from test;")
rows = cur.fetchall()
for row in rows:
    print row[0], type(row[0])
    print row[1], type(row[1])
cur.close()
con.close()
print '-' * 25

# OK. That works. Now for the real test. Let's try an equality test.
# Classic float equality failure: .1 + .1 ... 10 times should NOT equal 1.
# As predicted, this will FAIL the equality test.

print 'Testing Sum aggregation on native data types. Should be float and should fail equality test.'
con = sqlite3.connect('test3.db')
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
for x in range(10):
    cur.execute("insert into test values (null, .1);")
cur.execute("select sum(somenumber) as total from test;")
rows = cur.fetchall()
print rows[0][0], type(rows[0][0])
if rows[0][0] == 1:
    print 'equal'
else:
    print 'NOT equal'
cur.close()
con.close()
print '-' * 25

# Now, we try the exact same equality test, using adapters and converters,
# substituting the Python exact precision decimal type for float.

# Probably don't need to re-register. We did that above. We probably just
# need to parse declared types when we open the connection.
# Hmmm... This fails whether I re-register or not.
# sqlite3.register_adapter(decimal.Decimal, AdaptDecimal)
# sqlite3.register_converter("Decimal", ConvertDecimal)
print "Testing Sum aggregation with adapters / converters registered. Result SHOULD BE Python Decimal type, but is NOT. Should PASS equality test, but doesn't."
con = sqlite3.connect('test4.db', detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
for x

Re: [sqlite] Step Query

2007-06-19 Thread Andrew Finkenstadt

On 6/19/07, Andre du Plessis <[EMAIL PROTECTED]> wrote:


I had lots of problems here when starting with SQLite, and painstakingly I
think I've figured it out.



Me too.

You have sqlite3_prepare, which compiles the SQL into byte code, then

Sqlite3_step to execute the query or update; if it is an update then
there is no row, if it is a query then call step until there are no more rows.

Once done stepping you must call either sqlite3_reset or sqlite3_finalize.
I believe that finalize will do a reset and free resources. Reset is
designed to let you reuse the query or update.



Correct.  I ended up writing a (multi-thread aware) C++ framework to keep me
out of trouble.  In the SQLite namespace I have

 class exception;
 class database;
 class connection;
 class statement;
 class execution;
 class query_result;

where the ownership model is well-defined, and the data-use paths are
protected from coding mistakes at compile time.  There can be only one
execution attached to a statement at any one time, and the query result is
owned by the execution.  When the execution terminates (goes out of scope),
the statement is reset automatically.
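
For reference, a bare-bones C sketch of the "prepare once, use many"
pattern such a framework wraps (the table, column and function names are
made up):

#include <sqlite3.h>

/* Prepare once, then bind/step/reset for each use. Resetting promptly
** matters: a statement left un-reset can hold a read lock and hand
** other connections SQLITE_BUSY. Illustrative, not thread-safe. */
int count_rows_for(sqlite3 *db, int id, int *out){
  static sqlite3_stmt *stmt = 0;          /* prepared on first call */
  int rc;
  if( stmt==0 ){
    rc = sqlite3_prepare(db, "SELECT count(*) FROM t WHERE id=?", -1, &stmt, 0);
    if( rc!=SQLITE_OK ) return rc;
  }
  sqlite3_bind_int(stmt, 1, id);
  rc = sqlite3_step(stmt);
  if( rc==SQLITE_ROW ) *out = sqlite3_column_int(stmt, 0);
  sqlite3_reset(stmt);                    /* release locks, keep the plan */
  return rc==SQLITE_ROW ? SQLITE_OK : rc;
}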

It is important though to reset (if you don't finalize), because if you
don't reset you may hold an open lock on the table, and this will lock
out other processes: they will get a SQLITE_BUSY error because,
depending on what the sql is doing, it may have a cursor which locks
the table.



Exactly the result I had.  Since I am a strong believer in "prepare once,
use many" for performance reasons, I ended up having to write my own
framework to keep me out of trouble, and to reduce the amount of "busy work"
around the "C" interface to sqlite.

--a


Re: [sqlite] Recommend server for Windows?

2007-06-19 Thread Clay Dowling

Gilles Ganault wrote:

> We'd really like to stick to SQLite because it's very easy to set up, and
> most of our customers don't have anyone technical around to help them set
> up a DBMS server.

I'm going to recommend PostgreSQL.  It's very easy to install from your
application's installer and quite simple to administer.  The supporting
utilities are also of excellent quality, so that it's pretty simple to set
up a shortcut that would let your customers do a backup.

Clay Dowling
-- 
Simple Content Management
http://www.ceamus.com





Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread pompomJuice

I hear you.

If nothing else works then I will have to switch to using Oracle. It would
require a substantial code rewrite though. Sigh.

Thanks for the advice.


Andrew Finkenstadt wrote:
> 
> On 6/19/07, pompomJuice <[EMAIL PROTECTED]> wrote:
>>
>>
>> Running a huge Oracle 10G database
>> Running a 3rd party application that generates HUGE IO.
>> Part of this 3rd party application is my application that does lookups.
>>
>> 1.) Data comes in in the form of files.
>> 2.) 3rd party application decodes and prepares these tickets or records
>> for
>> insert into Oracle
>> 3.) As these records flow by, we use key fields inside them to do lookups
>> into this lightweight SQL database as doing it into oracle would be even
>> slower.
>>
> 
> 
> When doing lookup tables in Oracle (9i or later stable versions), you are
> better off doing an index-organized table to reduce by 1 seek the amount
> of
> I/O Oracle does, and using a hash-key index-organized table to reduce the
> seeks to (usually) ONLY 1.  Without knowing where in your box the Oracle
> instance is, it's difficult to say for sure, but assuming it's on the same
> machine, I would try this out, based on my experience:
> 
> PIN the lookup table (index) into the buffer cache, giving Oracle the
> extra
> 5GB of memory space to do so.  Assuming you have a persistent connection
> to
> Oracle from your fly-by-update process and your fly-by-lookup process, you
> might find Oracle performs adequately.  Either way I'd measure what's
> taking
> the time in doing lookups in Oracle that would be even slower;  I assume
> you
> use bind parameters and cached SQL cursors (you get this automatically
> with
> most PL/SQL constructs [excepting dynamic sql type 4]), and basically have
> 1
> parse per cursor prepare at application startup time.
> 
> 






[sqlite] Recommend server for Windows?

2007-06-19 Thread Gilles Ganault

Hello

	Until now, our users were mostly on single hosts, and the few who did share 
an SQLite database through a shared drive had a small database and very low 
concurrency. But a couple of new ones have a DB that's about 50MB, running 
on a 10Mbps LAN... and an INSERT takes about 10 seconds :-/


So, I'd like some feedback on the SQLite servers that I found, or any other 
practical solutions I should know about:


http://users.libero.it/irwin/
http://www.oneledger.co.uk/sql4sockets.html
http://sqlitedbms.sourceforge.net/index.htm
http://www.terrainformatica.com/sqlitedbserver/
dhRPCServer + dhSQLite-COM-Wrapper http://www.thecommon.net/8.html
http://www.it77.de/sqlite/sqlite_sockets.htm

We'd really like to stick to SQLite because it's very easy to set up, and 
most of our customers don't have anyone technical around to help them set 
up a DBMS server.


Thank you
G.





Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-19 Thread Michael Hooker
Many thanks for the explanation Dan.  I suspected the purpose of ROLLBACK 
was as you say, but couldn't see why it was used here.  You point out the 
"under the hood" difference between ROLLBACK and COMMIT, but what about END? 
My main (third-party, commercial) application may well have some data stored 
waiting to be fed into the database after the file has been copied, and if 
it is forced to discard its cache that presumably means these are lost, 
which wouldn't be a good idea.   I'm not entirely sure that the application 
goes about storing data in a sensible fashion anyway, as sometimes the 
snapshot it is supposed to give does not include items I know were received 
into the system hours ago - while at other times things appear instantly.


So, calling the Windows CopyFile function from Delphi as I proposed counts 
as copying "at the OS level", does it?


Thanks

Michael Hooker
co-list-owner Original Gatwick Spotters List & Gatwick AvPhotos
http://www.AvPhotosOnline.org.uk
- Original Message - 
From: "Dan Kennedy" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, June 19, 2007 4:55 AM
Subject: Re: [sqlite] Proper way to transfer a live sqlite database



On Tue, 2007-06-19 at 00:46 +0100, Michael Hooker wrote:

Christian wrote:

>>Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the
database file, then executing a 'ROLLBACK' to end the transaction.<<

>>and can be safely copied at the OS level<<

I also have a need to copy a live database which is constantly being 
updated 24/7 by software which decodes radio signals.  I need to safely 
interrogate and process a static version of the data without risk of 
messing up the original.  Once midnight has passed, the copy can be taken 
and the previous day's data extracted from the copy.  But as a raw 
beginner I don't clearly understand what is being said here.

When you say >>and can be safely copied at the OS level<<, I guess you must 
mean something more than right-click the file icon and select "Copy" ;)  In 
any case I would much prefer the copy to be taken programmatically, which 
would be neater and faster, and could be done automatically in the middle of 
the night when the data flow is much less intense.  I use, as best I can, 
Delphi 7 and Ralf Junker's DisqLite3, so can I safely:-

(1) send a BEGIN IMMEDIATE command,

(2) issue the Delphi Windows API command CopyFile(PChar(SourceFileName),
PChar(DestFileName), FALSE);

(3) send a ROLLBACK command.

The destination would be the same folder as the source, so no network 
delay.

The file is about 55 Megabytes.

My data is of no commercial value, but I have a few hundred people looking 
forward to my reports every day and don't want to mess it up;  I have no 
other SQLite3 databases to experiment with, so please forgive me for asking 
you experts what is probably a very basic question.  It would also be very 
helpful if someone could explain in jargon-free terms what ROLLBACK means in 
this context and why it apparently serves the purpose of finishing the 
transaction which has not attempted to change anything (why not END?)


"ROLLBACK" means abandon the current transaction, and put the database
back the way it was before the transaction started. To "roll back" all
changes so far.

In this specific context, the important part is that the "BEGIN
IMMEDIATE" locks the database file and the "ROLLBACK" releases the
lock. A "COMMIT" or "END" would be logically identical - it releases
the lock, and since there were no database changes made in this
transaction, it doesn't matter if they are rolled back or not.

Under the hood, there is a minor difference - a COMMIT will update
the database change-counter, meaning that all other connections
will need to discard their caches. A ROLLBACK does not update the
change-counter, so caches held by other connections will remain
valid.

Dan.



 - I keep coming across the word and I'm sure it means something fairly 
simple, but I have not encountered it until I started looking at SQLite.  
I've let Delphi and VisualDB handle all my database work through the BDE 
until now and never had any need to worry about locking or contentions.

Thanks

Michael Hooker

- Original Message - 
From: "Christian Smith" <[EMAIL PROTECTED]>

To: 
Sent: Monday, June 18, 2007 6:39 PM
Subject: Re: [sqlite] Proper way to transfer a live sqlite database


> Rich Rattanni uttered:
>
>> The databases will be in flux, and I didnt necessairly want to suspend
>> the application that is performs reads and writes into the database.
>> A simple copy worries me because it seems like messing with SQLITE on
>> the file level is dangerous since you circumvent all the protection
>> mechanisms that provide fault tolerance.  I didnt want to have to
>> worry about if the database has a journal file that needs copied, or
>> any other situation like that.  I figured using the SQLITE API to do
>> the copy would award me some protection against corruption.
>
>
> You're right to be cautiou

Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Andrew Finkenstadt

On 6/19/07, pompomJuice <[EMAIL PROTECTED]> wrote:



Running a huge Oracle 10G database
Running a 3rd party application that generates HUGE IO.
Part of this 3rd party application is my application that does lookups.

1.) Data comes in in the form of files.
2.) 3rd party application decodes and prepares these tickets or records
for
insert into Oracle
3.) As these records flow by, we use key fields inside them to do lookups
into this lightweight SQL database as doing it into oracle would be even
slower.




When doing lookup tables in Oracle (9i or later stable versions), you are
better off doing an index-organized table to reduce by 1 seek the amount of
I/O Oracle does, and using a hash-key index-organized table to reduce the
seeks to (usually) ONLY 1.  Without knowing where in your box the Oracle
instance is, it's difficult to say for sure, but assuming it's on the same
machine, I would try this out, based on my experience:

PIN the lookup table (index) into the buffer cache, giving Oracle the extra
5GB of memory space to do so.  Assuming you have a persistent connection to
Oracle from your fly-by-update process and your fly-by-lookup process, you
might find Oracle performs adequately.  Either way I'd measure what's taking
the time in doing lookups in Oracle that would be even slower;  I assume you
use bind parameters and cached SQL cursors (you get this automatically with
most PL/SQL constructs [excepting dynamic sql type 4]), and basically have 1
parse per cursor prepare at application startup time.


Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread pompomJuice

Thats exactly why I thought this sqlite would work.

16 Processor machine
~40Gb ram
EMC storage
Running a huge Oracle 10G database
Running a 3rd party application that generates HUGE IO. 
Part of this 3rd party application is my application that does lookups.

1.) Data comes in in the form of files.
2.) 3rd party application decodes and prepares these tickets or records for
insert into Oracle
3.) As these records flow by, we use key fields inside them to do lookups
into this lightweight SQL database as doing it into oracle would be even
slower.

When I deployed my solution I initially set the cache size very small, as I
thought we could make gains by instead having the OS cache the SQLite db
file. This strategy failed miserably. The 3rd party app generates so much IO
that my SQLite file simply has no chance of staying cached.

So I then modified my program to mess with PRAGMA cache_size to see if that
would help. Initially it did nothing, until I edited
SQLITE_DEFAULT_CACHE_SIZE in sqliteInt.h (which I see is now in limits) and
only then would my program start consuming massive amounts of ram. That
worked brilliantly for a while, until the lookup table was updated. All
programs went dead slow again, and it would take them a good 2 hours to
fully cache that 5GB file again.

I am panicking now, as I have been working on this project for a few months
and it seems to be doomed. The only course of action I see is finding a way
to somehow increase the IO performance, as that is where I believe the
problem resides. Copying the DB file to and from the file system is very
fast, so I am at a loss as to why my application is not getting the
throughput. Maybe because it's not accessing the file sequentially... I
don't know.

Thanks for the help.

Regards.



Christian Smith-4 wrote:
> 
> pompomJuice uttered:
> 
>>
>> I suspected something like this, as it makes sense.
>>
>> I have multiple binaries/different connections ( and I cannot make them
>> share a connection ) using this one lookup table and depending on which
>> connection checks first, it will update the table.
> 
> 
> What is your working set like? Are all processes on the same machine?
> 
> Sounds like you might benefit from increasing the amount of RAM on this 
> machine. You may be thrashing the OS cache, as your lookup process hogs 
> memory for its own cache, pushing out old pages from the OS filesystem 
> cache.
> 
> If RAM upgrade is not feasible, then try reducing the cache of the lookup 
> process, so that the OS cache isn't forced out of memory so easily. Then, 
> when the lookup process has a cache miss, its missed page is more likely 
> to be in the OS memory cache, and copied to the lookup process at memory 
> copy speed.
> 
> As you may have guessed, choosing the correct cache size for the lookup 
> process may involve several tuning iterations.
> 
> 
> Christian
> 
> --
>  /"\
>  \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
>   X   - AGAINST MS ATTACHMENTS
>  / \
> 
> 
> 
> 






Re: [sqlite] Need help linking into Delphi Application

2007-06-19 Thread John Elrick

John Elrick wrote:
I've been using the Delphi ASGSqlite components with static linking 
for some time with version 3.3.13.  I'd like to move on up to 3.4.0, 
however, no one seems to have documented how to do this yet.


I tried compiling the Amalgamation with Borland C++ 5.0 and it 
generates the obj file nicely.  However, when I attempt to link the 
obj into my application, I am getting an "unsatisfied forward 
declaration __streams".


After playing around some more, I stumbled across the problem and 
solution.  In case anyone else attempts this, you must add a second link 
to streams.obj. 

I'll post this to the Aducom forum also along with greater detail as to 
where it needs to go.



John




[sqlite] Re: Custom Aggregation Not Working

2007-06-19 Thread Igor Tandetnik

wcmadness <[EMAIL PROTECTED]> wrote:

I'm writing a financial application and MUST have exact math decimals
(no floats). So, I'm using Python's decimal module.

My database is Sqlite (and my language is Python with Pysqlite);
Sqlite doesn't offer a non-floating point decimal type. But, it does
have adapters and converters to store data as a native Sqlite type
(string / text) in the database


Personally, I prefer storing monetary values in the database as scaled 
integers. Say, store all values in 1-millionth-of-a-dollar units, so one 
dollar is 1,000,000 and one cent is 10,000. Most math works on this 
representation out of the box.
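
A tiny C sketch of that representation (macro and function names are
invented for illustration):

#include <stdio.h>

/* Monetary values as scaled integers: 1 dollar = 1,000,000 units,
** 1 cent = 10,000 units. Addition, comparison, and multiplication by
** a whole number work out of the box. */
typedef long long micros_t;

#define DOLLARS(d) ((micros_t)(d) * 1000000LL)
#define CENTS(c)   ((micros_t)(c) * 10000LL)

int main(void){
  micros_t price = DOLLARS(19) + CENTS(99);   /* $19.99 */
  micros_t total = price * 3;                 /* exact: $59.97 */
  printf("%lld.%02lld\n", total / 1000000LL, (total % 1000000LL) / 10000LL);
  return 0;
}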


Igor Tandetnik 






Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Christian Smith

pompomJuice uttered:



I suspected something like this, as it makes sense.

I have multiple binaries/different connections ( and I cannot make them
share a connection ) using this one lookup table and depending on which
connection checks first, it will update the table.



What is your working set like? Are all processes on the same machine?

Sounds like you might benefit from increasing the amount of RAM on this 
machine. You may be thrashing the OS cache, as your lookup process hogs 
memory for its own cache, pushing out old pages from the OS filesystem 
cache.


If RAM upgrade is not feasible, then try reducing the cache of the lookup 
process, so that the OS cache isn't forced out of memory so easily. Then, 
when the lookup process has a cache miss, its missed page is more likely 
to be in the OS memory cache, and copied to the lookup process at memory 
copy speed.


As you may have guessed, choosing the correct cache size for the lookup 
process may involve several tuning iterations.



Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Lemon Parser - Modular & Extensible ?

2007-06-19 Thread Christian Smith

Uma Krishnan uttered:

Hey, there's no need to be offensive. I did not mean to be critical. Far 
from it, it does a great job (far more than I'm capable of producing). 
What I was trying to find out was whether it is possible for a .y file to 
be broken up such that it can be built on top of other .y files.



Sorry if I came across as offensive. That was not the intention. I was 
just a little confused about the question.


I think lemon can only handle single input files. But you can include 
C source in your output C file using the %include directive. Check out 
the documentation, if you haven't already done so, at:

http://www.hwaci.com/sw/lemon/lemon.html
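
For illustration, an untested fragment (hypothetical grammar rules and
header name) showing %include pulling shared C code into a grammar:

%include {
  /* C code here is copied verbatim into the generated parser */
  #include "shared_defs.h"
}

expr ::= expr PLUS term. { /* action code is plain C */ }
expr ::= term.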

Now, what may draw some critical analysis is top posting and hijacking an 
existing thread for a new topic... [snip]





 Not sure if this is the right group. But could not find a lemon parser 
user group.



This is the best group to ask. While not tied to SQLite, it appears to be 
maintained as part of SQLite (but I may be wrong.)


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Dan Kennedy

> My question is then, if any one connection makes any change to the database
> (not necessarily to the huge lookup table), will all the other connections
> invalidate their entire cache?

Yes. The entire cache, regardless of what table was modified etc.


Dan.



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread pompomJuice

I suspected something like this, as it makes sense.

I have multiple binaries/different connections (and I cannot make them
share a connection) using this one lookup table, and depending on which
connection checks first, it will update the table.

My question is then: if any one connection makes any change to the database
(not necessarily to the huge lookup table), will all the other connections
invalidate their entire cache? Or is the cache dropped per table/btree?

Thanks for that response. I can already move ahead with better knowledge
of how the caching works.

Regards.



Dan Kennedy-4 wrote:
> 
> On Tue, 2007-06-19 at 01:06 -0700, pompomJuice wrote:
>> Hello there.
>> 
>> I need some insight into how SQLite's caching works. I have a database
>> that is quite large (5Gb) sitting on a production server whose I/O is
>> severely taxed. This causes my SQLite db to perform very poorly. Most of
>> the time my application just sits there and uses about 10% of a CPU where
>> it would use 100% on test systems with idle I/O. Effectively what the
>> application does is constantly doing lookups as fast as it can.
>> 
>> To counteract this I increased the page size to 8192 (Unix server with
>> advfs having 8K block sizes) and increased the MAX_PAGES value in
>> sqliteInt.h to 512000. This worked. My application starts at low memory
>> usage and gradually gains more memory. As it gains more memory it uses
>> more CPU, and it reaches a point where it finally uses 100% CPU and 5Gb
>> of RAM.
>> 
>> Every now and then the lookup table is updated. As soon as the
>> application does this, however, the performance goes back to a crawl and
>> slowly builds up again as described in the previous paragraph. The memory
>> usage stays at 5Gb. All that I can think of is that the update
>> invalidates the cache. 
> 
> Probably right.
> 
>> The update is not very big, say 20 rows in a table that has about
>> 45 million rows.
>> 
>> What exactly is happening here?
> 
> Are you using 3.3.17? And is it an external process (or at least a
> different connection) doing the update?
> 
> If so, the update is modifying the pager change-counter, invalidating
> the pager cache held by the lookup application. The lookup app has
> to start loading pages from the disk again, instead of just reading
> its cache.
> 
> The only way around this performance hit is to do the UPDATE through
> the lookup app, using the same database connection.
> 
> Dan.
> 
>> Regards.
> 
> 
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Cache-invalidation-after-insert-statements.-tf3944908.html#a11192121
Sent from the SQLite mailing list archive at Nabble.com.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Custom Aggregation Not Working

2007-06-19 Thread Martin Jenkins

wcmadness wrote:
Hey, Folks: 


I'm writing a financial application and MUST have exact math decimals (no
floats). So, I'm using Python's decimal module. 


My database is Sqlite (and my language is Python with Pysqlite); Sqlite
doesn't offer a non-floating point decimal type. But, it does have adapters
and converters to store data as a native Sqlite type (string / text) in the
database and bring it out of the database and into memory as something else
(Python Decimal, in my case). That works great, but it does NOT seem to
apply to aggregation operations. I need it to. 


Hi,

Firstly, you might get better answers on the pysqlite mailing list at 
http://lists.initd.org/mailman/listinfo/pysqlite than here.


I've not used custom types/converters etc and couldn't see anything 
wrong with your code at first glance so here are a couple of "off the 
top of my head" comments.


In your tests you insert values as hard coded strings rather than bound 
variables. From looking at the code it wasn't obvious (to me) that the 
converters/adapters for the decimal classes will be called. I think 
you've assumed pysqlite will examine the column declaration and call the 
appropriate converter/adapter regardless, but the test4.db case doesn't 
bear this out.


I put prints into the converter and it was NOT called with an insert like:

  for x in range(10):
cur.execute("insert into test(somenumber) values (0.1)")

but it WAS called when I changed the insert to:

  d=decimal.Decimal("0.1")
  for x in range(10):
cur.execute("insert into test(somenumber) values (?)", (d,))

Going in the other direction, section 4.3 of the pysqlite docs at 
http://initd.org/pub/software/pysqlite/doc/usage-guide.html warns:


"!!! Note that converter functions always get called with a string, no 
matter under which data type you sent the value to SQLite !!!"


You've defined string <-> decimal conversions, but when you execute:

cur.execute("select sum(somenumber) as total from test;")

how does your adapter get called?

sum() won't return your custom type and there's no column info that I 
can see that would cause your adapter to be called. The column names 
example implies you could force the result of sum() to be adapted, but sum() 
won't perform addition according to the semantics of your custom type.


HTH

Martin

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] How do I close the command line window

2007-06-19 Thread JDoggen
Hello

I'm doing a bulk insert by calling
sqlite3 -init BulkinsertItems.sql mydatabasefile

with the BulkinsertItems.sql file containing:

.separator \t
.import BulkItems.txt items
.quit

The command window opens and the import works, but then it does not close 
again.
How can I have this close automatically?

Thanks
Jan Doggen
=
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the sender by return e-mail 
and delete this message from your system. This message contains confidential 
information and is intended only for the individual named. If you are not the 
named addressee you should not disseminate, distribute or copy this e-mail. 
This message has been scanned for viruses by Norman Virus Control.
NOTE! Norman has 37 100% awards to date and that makes it the 2nd most awarded 
virus scan (check www.virusbtn.com for more details).
=



Re: [sqlite] problems with .dump procedure

2007-06-19 Thread Dan Kennedy
On Tue, 2007-06-19 at 09:42 +0200, Roberto Davico wrote:
> Hi all,
> 
>I am Roberto and I found a problem using sqlite3 (version: SQLite 
> version 3.3.8).
> 
>I make a sample db to explain the situation:
> 
>1) Create the sample database using command line tool:
> 
> sqlite> .schema
> CREATE TABLE LOG (ID INTEGER PRIMARY KEY AUTOINCREMENT, MSG TEXT );
> CREATE TABLE USERS (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT, 
> SURNAME TEXT);
> CREATE VIEW USR AS SELECT NAME FROM USERS;
> CREATE TRIGGER USR_DELETE AFTER DELETE ON USERS BEGIN INSERT INTO LOG 
> (ID, MSG) VALUES( NULL, 'Cancellato utente'); END;
> 
> 
>2) Export with .dump command procedure
> 
> sqlite> .output dumpo.txt
> sqlite> .dump
> sqlite> .output stdout
> sqlite> .quit
> 
> 
>3) Analyze the dump file:
> 
> ~$ cat dumpo.txt
> BEGIN TRANSACTION;
> CREATE TABLE USERS (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT, 
> SURNAME TEXT);
> DELETE FROM sqlite_sequence;
> CREATE TABLE LOG (ID INTEGER PRIMARY KEY AUTOINCREMENT, MSG TEXT );
> COMMIT;
> 
> where are my view and trigger?
> 
> Is it a problem with my procedure, or is it not possible to dump views or 
> triggers? Are there settings I need to change before dumping?

It's an old bug:

  http://www.sqlite.org/cvstrac/tktview?tn=2044

You'll have to upgrade the sqlite version I think.

Dan.


> thanks all for any advice...
> 
> ciao
> Roberto
> 
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
> 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Dan Kennedy
On Tue, 2007-06-19 at 01:06 -0700, pompomJuice wrote:
> Hello there.
> 
> I need some insight into how SQLite's caching works. I have a database that
> is quite large (5Gb) sitting on a production server whose I/O is severely
> taxed. This causes my SQLite db to perform very poorly. Most of the time my
> application just sits there and uses about 10% of a CPU where it would use
> 100% on test systems with idle I/O. Effectively what the application does is
> constantly doing lookups as fast as it can.
> 
> To counteract this I increased the page size to 8192 (Unix server with advfs
> having 8K block sizes) and increased the MAX_PAGES value in sqliteInt.h to
> 512000. This worked. My application starts at low memory usage and gradually
> gains more memory. As it gains more memory it uses more CPU, and it reaches
> a point where it finally uses 100% CPU and 5Gb of RAM.
> 
> Every now and then the lookup table is updated. As soon as the application
> does this, however, the performance goes back to a crawl and slowly builds
> up again as described in the previous paragraph. The memory usage stays at
> 5Gb. All that I can think of is that the update invalidates the cache. 

Probably right.

> The update
> is not very big, say 20 rows in a table that has about 45 million rows.
> 
> What exactly is happening here?

Are you using 3.3.17? And is it an external process (or at least a
different connection) doing the update?

If so, the update is modifying the pager change-counter, invalidating
the pager cache held by the lookup application. The lookup app has
to start loading pages from the disk again, instead of just reading
its cache.

The only way around this performance hit is to do the UPDATE through
the lookup app, using the same database connection.
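
For illustration only (untested, hypothetical table and column names),
the refresh routed through the lookup app's own handle:

#include <sqlite3.h>

/* db is the same sqlite3* the lookup loop uses, so the pager cache
   survives the write instead of being invalidated. */
int refresh_lookup_rows(sqlite3 *db){
  return sqlite3_exec(db,
      "UPDATE lookup SET value='new' WHERE key=42;", 0, 0, 0);
}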

Dan.

> Regards.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Step Query

2007-06-19 Thread Andre du Plessis
I had lots of problems here when starting with SQLite, and painstakingly
I think I've figured it out.

You have sqlite3_prepare, which compiles the SQL into byte code, then 
sqlite3_step to execute the query or update. If it is an update there
is no row; if it is a query, call step until there are no more rows. 

Once done stepping you must call either sqlite3_reset or sqlite3_finalize;
I believe finalize does a reset and also frees resources. Reset is
designed so the query or update can be reused.

It is important though to reset (if you don't finalize), because if you
don't reset you may hold an open lock on the table, and this will lock
out other processes, which will then get a SQLITE_BUSY error: depending
on what the SQL is doing, it may have a cursor that locks the table.

So your code is fine.
But at the end of your rows you don't have to call finalize; you must
call reset. You don't have to call finalize right away; do it, say, on
object destruction to free the resources. After reset is called you won't
hold a lock on the table anymore.

This is how I understand things, but I would like someone to tell me if
I'm wrong.
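
To make that concrete, here is an untested sketch (hypothetical table and
data) of reusing one statement with reset and finalizing it once at the end:

#include <sqlite3.h>
#include <stdio.h>

static int lookup(sqlite3_stmt *stmt, int key){
  int rc;
  sqlite3_bind_int(stmt, 1, key);
  while( (rc = sqlite3_step(stmt))==SQLITE_ROW ){
    printf("key %d -> %s\n", key,
           (const char*)sqlite3_column_text(stmt, 0));
  }
  sqlite3_reset(stmt);   /* releases locks; statement can be reused */
  return rc==SQLITE_DONE ? SQLITE_OK : rc;
}

int main(void){
  sqlite3 *db;
  sqlite3_stmt *stmt;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db, "CREATE TABLE t(k INTEGER, v TEXT);", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(1,'one');", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(2,'two');", 0, 0, 0);
  sqlite3_prepare_v2(db, "SELECT v FROM t WHERE k=?", -1, &stmt, 0);
  lookup(stmt, 1);          /* reset after each use...              */
  lookup(stmt, 2);
  sqlite3_finalize(stmt);   /* ...finalize once, when done for good */
  sqlite3_close(db);
  return 0;
}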


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
anand chugh
Sent: 19 June 2007 07:28 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Step Query

Hi

I am having code like this:

   rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
   if( rc!=SQLITE_OK ){
     return rc;
   }
   sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC);
   sqlite3_bind_blob(pStmt, 2, zBlob, nBlob, SQLITE_STATIC);

   while( sqlite3_step(pStmt)==SQLITE_ROW ){
     *pnBlob = sqlite3_column_bytes(pStmt, 0);
     *pzBlob = (unsigned char *)malloc(*pnBlob);
     memcpy(*pzBlob, sqlite3_column_blob(pStmt, 0), *pnBlob);
   }

  sqlite3_finalize(pStmt);

My question here is: do I need to call sqlite3_finalize(pStmt) after
every sqlite3_step() to free all memory allocated by sqlite3_step()?
Does calling finalize at the end free all memory allocated by all the
step statements?

 The example shown at http://www.sqlite.org/cvstrac/wiki?p=BlobExample does
the same; it calls finalize after every step.

My Program shows some Memory Leaks(Virtual Bytes).

Please clarify.

Anand


-
To unsubscribe, send email to [EMAIL PROTECTED]

-


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Sorry about the multiple threads...

2007-06-19 Thread pompomJuice

When I pressed post message it told me "read error", connection reset or
something, and after the third time I thought I'd restart my browser, only
to see that it had posted 3 times!


-- 
View this message in context: 
http://www.nabble.com/Sorry-about-the-multiple-threads...-tf3944931.html#a11190427
Sent from the SQLite mailing list archive at Nabble.com.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] SQLite cache question.

2007-06-19 Thread pompomJuice

Hello there.

I need some insight into how SQLite's caching works. I have a database that
is quite large (5Gb) sitting on a production server whose I/O is severely
taxed. This causes my SQLite db to perform very poorly. Most of the time my
application just sits there and uses about 10% of a CPU where it would use
100% on test systems with idle I/O. Effectively what the application does is
constantly doing lookups as fast as it can.

To counteract this I increased the page size to 8192 (Unix server with advfs
having 8K block sizes) and increased the MAX_PAGES value in sqliteInt.h to
512000. This worked. My application starts at low memory usage and gradually
gains more memory. As it gains more memory it uses more CPU, and it reaches
a point where it finally uses 100% CPU and 5Gb of RAM.

Every now and then the lookup table is updated. As soon as the application
does this, however, the performance goes back to a crawl and slowly builds
up again as described in the previous paragraph. The memory usage stays at
5Gb. All that I can think of is that the update invalidates the cache. The
update is not very big, say 20 rows in a table that has about 45 million
rows.

What exactly is happening here?

Regards.
-- 
View this message in context: 
http://www.nabble.com/SQLite-cache-question.-tf3944886.html#a11190296
Sent from the SQLite mailing list archive at Nabble.com.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Cache invalidation after insert statements.

2007-06-19 Thread pompomJuice

Hello there.

I need some insight into how SQLite's caching works. I have a database that
is quite large (5Gb) sitting on a production server whose I/O is severely
taxed. This causes my SQLite db to perform very poorly. Most of the time my
application just sits there and uses about 10% of a CPU where it would use
100% on test systems with idle I/O. Effectively what the application does is
constantly doing lookups as fast as it can.

To counteract this I increased the page size to 8192 (Unix server with advfs
having 8K block sizes) and increased the MAX_PAGES value in sqliteInt.h to
512000. This worked. My application starts at low memory usage and gradually
gains more memory. As it gains more memory it uses more CPU, and it reaches
a point where it finally uses 100% CPU and 5Gb of RAM.

Every now and then the lookup table is updated. As soon as the application
does this, however, the performance goes back to a crawl and slowly builds
up again as described in the previous paragraph. The memory usage stays at
5Gb. All that I can think of is that the update invalidates the cache. The
update is not very big, say 20 rows in a table that has about 45 million
rows.

What exactly is happening here?

Regards.
-- 
View this message in context: 
http://www.nabble.com/Cache-invalidation-after-insert-statements.-tf3944881.html#a11190285
Sent from the SQLite mailing list archive at Nabble.com.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] problems with .dump procedure

2007-06-19 Thread Roberto Davico

Hi all,

  I am Roberto and I found a problem using sqlite3 (version: SQLite 
version 3.3.8).


  I make a sample db to explain the situation:

  1) Create the sample database using command line tool:

sqlite> .schema
CREATE TABLE LOG (ID INTEGER PRIMARY KEY AUTOINCREMENT, MSG TEXT );
CREATE TABLE USERS (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT, 
SURNAME TEXT);

CREATE VIEW USR AS SELECT NAME FROM USERS;
CREATE TRIGGER USR_DELETE AFTER DELETE ON USERS BEGIN INSERT INTO LOG 
(ID, MSG) VALUES( NULL, 'Cancellato utente'); END;



  2) Export with .dump command procedure

sqlite> .output dumpo.txt
sqlite> .dump
sqlite> .output stdout
sqlite> .quit


  3) Analyze the dump file:

~$ cat dumpo.txt
BEGIN TRANSACTION;
CREATE TABLE USERS (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT, 
SURNAME TEXT);

DELETE FROM sqlite_sequence;
CREATE TABLE LOG (ID INTEGER PRIMARY KEY AUTOINCREMENT, MSG TEXT );
COMMIT;

where are my view and trigger?

Is it a problem with my procedure, or is it not possible to dump views or 
triggers? Are there settings I need to change before dumping?


thanks all for any advice...

ciao
Roberto

-
To unsubscribe, send email to [EMAIL PROTECTED]
-