Re: [sqlite] page_size

2010-04-25 Thread Tom Broadbent

ah - good to know. that is what i was looking for.

thanks
tom

On Apr 25, 2010, at 10:21 PM, Dan Kennedy wrote:

> The page formats use 16-bit unsigned integers to store various offsets
> (in bytes) to cells and free-blocks within a page. So it definitely
> won't work with greater than 64KB pages.
> 
> Not sure if 64KB would work or not. Since it hasn't been tested, the
> answer is probably "No.".
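
for reference, the 16-bit fields are the cell pointer array entries in the
b-tree page format; a sketch of how one such offset is read (illustrative
only, not the actual SQLite source):

/* cell pointers are stored as 2-byte big-endian offsets from the start
** of the page, so no offset can ever exceed 65535 */
unsigned int cell_offset(const unsigned char *page, int hdr, int i){
  const unsigned char *p = &page[hdr + 2*i];
  return ((unsigned int)p[0] << 8) | p[1];
}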




Re: [sqlite] page_size

2010-04-25 Thread Tom Broadbent

thanks for the replies.

understood.  i was informed that our underlying (EMMC??) FS will do this w/ FS 
pages, i.e. read the entire 256k FS page, modify some small portion of it, and 
write it out again.  i'm higher in the stack so i don't understand the 
underlying FS mechanisms; i've simply asked the lower-level devs for advice 
about maximizing FS IO.  

on this particular FS i'm under the impression that writing a 1k SQLite page 
requires reading a 256k FS page, modifying the 1k portion and then writing the 
whole 256k FS page out. i'll verify w/ the lower-level devs next week.

yes, this sounds _very_ inefficient, but perhaps this is why our commit times 
are so poor (i.e. writing each DB page requires reading and writing a much 
larger FS page).

yes - i'm aware that each table and index are in a separate DB page; i saw 
evidence of this when bumping the page size from 1k to 32k.  

this DB has only one table w/ a single pkey index.  the DB will be relatively 
large (10MB) compared to the page size and will continue to grow over time, so 
i'm not concerned.

basically i want to know if SQLite will have internal problems running w/ a 
page_size greater than the recommended (required?) max of 32k.
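
for concreteness, here's roughly the configuration in question (a sketch,
untested, and per the docs quoted below the cap is a hard one; the filename
is just an example):

/* build with -DSQLITE_MAX_PAGE_SIZE=262144 (default cap is 32768) */
sqlite3 *db;
sqlite3_open("test.db", &db);
/* page_size only takes effect before the database file is initialized */
sqlite3_exec(db, "PRAGMA page_size = 262144", 0, 0, 0);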

thanks
tom


On Apr 21, 2010, at 1:39 PM, D. Richard Hipp wrote:

> 
> On Apr 21, 2010, at 4:37 PM, Pavel Ivanov wrote:
> 
>> I don't know anything about internal support of pages bigger than 32k.
>> But I want to warn you: each table and each index in SQLite occupy at
>> least 1 database page. So let's say you have 4 tables with 1
>> additional index each (besides the 'integer primary key' one). With 256k
>> pages this schema means 4 + 4 = 8 b-trees, hence at least 8 pages, i.e. a
>> database of more than 2 MB without any data stored. Is your embedded FS
>> okay with this storage amount?
> 
> Furthermore, SQLite changes whole pages at a time.  So in a database  
> with 256kB pages, if you change a single byte, you still have to write  
> 256kB both to the rollback journal and to the database file.
> 
>> 
>> 
>> Pavel
>> 
>> On Tue, Apr 20, 2010 at 4:51 PM, Tom Broadbent
>> <tom.broadb...@plasticlogic.com> wrote:
>>> i've read in the docs that SQLITE_MAX_PAGE_SIZE can't be > 32k  
>>> (below).  is this limitation still valid?
>>> 
>>> we have an embedded FS that is _very_ slow and performs best w/ a  
>>> write page size of 256k.  will bad things happen if i configure  
>>> SQLite w/ 256k pages?
>>> 
>>> thanks
>>> tom
>>> 
>>> Maximum Database Page Size
>>> 
>>> An SQLite database file is organized as pages. The size of each  
>>> page is a power of 2 between 512 and SQLITE_MAX_PAGE_SIZE. The  
>>> default value for SQLITE_MAX_PAGE_SIZE is 32768. The current  
>>> implementation will not support a larger value.
>>> 
>>> It used to be the case that SQLite would allocate some stack  
>>> structures whose size was proportional to the maximum page size.  
>>> For this reason, SQLite would sometimes be compiled with a smaller  
>>> maximum page size on embedded devices with limited stack memory.  
>>> But more recent versions of SQLite put these large structures on  
>>> the heap, not on the stack, so reducing the maximum page size is no  
>>> longer necessary on embedded devices. There is no longer any real  
>>> reason to lower the maximum page size.
>>> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 




[sqlite] page_size

2010-04-20 Thread Tom Broadbent
i've read in the docs that SQLITE_MAX_PAGE_SIZE can't be > 32k (below).  is 
this limitation still valid?

we have an embedded FS that is _very_ slow and performs best w/ a write page 
size of 256k.  will bad things happen if i configure SQLite w/ 256k pages?

thanks
tom

Maximum Database Page Size

An SQLite database file is organized as pages. The size of each page is a power 
of 2 between 512 and SQLITE_MAX_PAGE_SIZE. The default value for 
SQLITE_MAX_PAGE_SIZE is 32768. The current implementation will not support a 
larger value.

It used to be the case that SQLite would allocate some stack structures whose 
size was proportional to the maximum page size. For this reason, SQLite would 
sometimes be compiled with a smaller maximum page size on embedded devices with 
limited stack memory. But more recent versions of SQLite put these large 
structures on the heap, not on the stack, so reducing the maximum page size is 
no longer necessary on embedded devices. There is no longer any real reason to 
lower the maximum page size.



[sqlite] Pagesize and performance

2010-03-24 Thread Tom Broadbent
mainly just want to summarize and get confirmation on a few points.


1. matching pagesize to file system record size is recommended for write 
performance improvements

2. default SQLite pagesize is 1k; max SQLite pagesize is 32k

3. docs seem to say that the 32k max pagesize is no longer required, i.e. one 
could possibly compile w/ a larger max pagesize (and thus a larger pagesize; 
see the sketch after this list)

4. larger pagesize will cause more data to be read from the file system to do 
a read (i.e. an entire page must be read)
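
a minimal sketch of points 1-3 within the currently supported range (the
filename is just an example; page_size must be set before the schema exists):

sqlite3 *db;
sqlite3_open("content.db", &db);
/* match the FS record size, up to the 32k cap */
sqlite3_exec(db, "PRAGMA page_size = 32768", 0, 0, 0);
sqlite3_exec(db, "CREATE TABLE t(x)", 0, 0, 0);  /* first write fixes the size */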

thanks
tom



Re: [sqlite] begin transaction

2009-11-23 Thread Tom Broadbent
btw - would shared cache mode play a part in this behavior?  for example, if i 
have shared cache mode enabled then i could have a different thread in this 
process that has a shared lock at the time my thread is trying to begin trans 
immediate.  would this (effectively) cause a lock promotion from shared to 
reserved and thus fail immediately?

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Tom Broadbent
Sent: Monday, November 23, 2009 12:36 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] begin transaction

if there are open read cursors this is not by design.  i'm reviewing now and 
ensuring that read cursors are closed.

will review more, but all the reads are behind APIs that explicitly finalize 
(or reset) at the end of the API call... at least this is the intent.  it is 
possible that there is a bug but i've tried to be very explicit about 
finalizing/resetting statements.

thanks
tom 

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Pavel Ivanov
Sent: Monday, November 23, 2009 11:34 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] begin transaction

> what is interesting is that proc B's 'item not found in locker!' is a read 
> operation.  so possibly i've (inadvertently) hit the 'shared lock upgrade to 
> reserved lock' immediate fail.
>
> thoughts?

Did you reset/finalize statement after this read operation? Do you
have any other active (not reset/finalized) select statements in the
same connection in proc B when you try to begin immediate transaction?


Pavel

On Mon, Nov 23, 2009 at 2:28 PM, Tom Broadbent
<tom.broadb...@plasticlogic.com> wrote:
> thanks for the interesting comments.  here is the log that (i believe) 
> illustrates the problem:
>
> 092250 PID:07E50002 TID:07E60002 PLFS!INFO: attempting to begin IMMEDIATE 
> transaction
> 092260 PID:04310006 TID:0548000E PLFS!INFO: item not found in locker!
> 092267 PID:04310006 TID:0548000E PLFS!INFO: attempting to begin IMMEDIATE 
> transaction
> 092284 PID:07E50002 TID:07E60002 PLFS!INFO: attempting to end transaction
> 092391 PID:04310006 TID:0548000E PLFS!ERROR: [SQLite] 
> PLFS::PLFSProxy::BeginTransaction: Error 5; (.\PLFSProxy.cpp)
> 092399 PID:04310006 TID:0548000E PLFS!ERROR: 
> PLFS::PLFSProxy::BeginTransaction: Error 205 [PLFS_ERROR_DB_LOCK_FAILED]; 
> (.\PLFSProxy.cpp:3364)
> 092412 PID:04310006 TID:0548000E PLFS!ERROR: PLFS::PLFSProxy::DeleteId: Error 
> 205 [PLFS_ERROR_DB_LOCK_FAILED]; (.\PLFSProxy.cpp:2025)
> 092424 PID:04310006 TID:0548000E PLFS!ERROR: 
> PLFS::BnLockerProxy::SyncPlfsToLocker: Error 205 [PLFS_ERROR_DB_LOCK_FAILED]; 
> (.\PLFSBnLockerManager.cpp:272
>
> what this shows (to me) is that proc A (07E50002) got the IMMEDIATE 
> transaction and proc B (04310006) failed to get an IMMEDIATE transaction.
>
> what is interesting is that proc B's 'item not found in locker!' is a read 
> operation.  so possibly i've (inadvertently) hit the 'shared lock upgrade to 
> reserved lock' immediate fail.
>
> thoughts?
>
> thanks
> tom
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> Sent: Monday, November 23, 2009 7:36 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] begin transaction
>
>> That's true, but the comment is a bit deceptive. In this
>> particular case SQLite is supposed to invoke the busy-handler. What
>> should happen is that SQLite grabs the SHARED lock then fails to get
>> the RESERVED. But in this case SQLite is smart enough to know that
>> it can release the SHARED lock, invoke the busy-handler, then try
>> again.
>
> I'm glad it is so.
> BTW, this special case (with your great examples), as well as all the cases
> where the busy handler is or is not invoked, would be better mentioned
> somewhere in here http://www.sqlite.org/lockingv3.html or here
> http://www.sqlite.org/c3ref/busy_handler.html.
>
> Pavel
>
> On Mon, Nov 23, 2009 at 10:17 AM, Dan Kennedy <danielk1...@gmail.com> wrote:
>>
>> On Nov 23, 2009, at 9:26 PM, Pavel Ivanov wrote:
>>
>>> According to SQLite sources:
>>>
>>> ** The pager invokes the busy-handler if sqlite3OsLock() returns
>>> ** SQLITE_BUSY when trying to upgrade from no-lock to a SHARED lock,
>>> ** or when trying to upgrade from a RESERVED lock to an EXCLUSIVE
>>> ** lock. It does *not* invoke the busy handler when upgrading from
>>> ** SHARED to RESERVED, or when upgrading from SHARED to EXCLUSIVE
>>> ** (which occurs during hot-journal rollback). Summary:
>>> **
>>> **   Transition         

Re: [sqlite] begin transaction

2009-11-23 Thread Tom Broadbent
if there are open read cursors this is not by design.  i'm reviewing now and 
ensuring that read cursors are closed.

will review more, but all the reads are behind APIs that explicitly finalize 
(or reset) at the end of the API call... at least this is the intent.  it is 
possible that there is a bug but i've tried to be very explicit about 
finalizing/resetting statements.

thanks
tom 

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Pavel Ivanov
Sent: Monday, November 23, 2009 11:34 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] begin transaction

> what is interesting is that proc B's 'item not found in locker!' is a read 
> operation.  so possibly i've (inadvertently) hit the 'shared lock upgrade to 
> reserved lock' immediate fail.
>
> thoughts?

Did you reset/finalize statement after this read operation? Do you
have any other active (not reset/finalized) select statements in the
same connection in proc B when you try to begin immediate transaction?


Pavel

On Mon, Nov 23, 2009 at 2:28 PM, Tom Broadbent
<tom.broadb...@plasticlogic.com> wrote:
> thanks for the interesting comments.  here is the log that (i believe) 
> illustrates the problem:
>
> 092250 PID:07E50002 TID:07E60002 PLFS!INFO: attempting to begin IMMEDIATE 
> transaction
> 092260 PID:04310006 TID:0548000E PLFS!INFO: item not found in locker!
> 092267 PID:04310006 TID:0548000E PLFS!INFO: attempting to begin IMMEDIATE 
> transaction
> 092284 PID:07E50002 TID:07E60002 PLFS!INFO: attempting to end transaction
> 092391 PID:04310006 TID:0548000E PLFS!ERROR: [SQLite] 
> PLFS::PLFSProxy::BeginTransaction: Error 5; (.\PLFSProxy.cpp)
> 092399 PID:04310006 TID:0548000E PLFS!ERROR: 
> PLFS::PLFSProxy::BeginTransaction: Error 205 [PLFS_ERROR_DB_LOCK_FAILED]; 
> (.\PLFSProxy.cpp:3364)
> 092412 PID:04310006 TID:0548000E PLFS!ERROR: PLFS::PLFSProxy::DeleteId: Error 
> 205 [PLFS_ERROR_DB_LOCK_FAILED]; (.\PLFSProxy.cpp:2025)
> 092424 PID:04310006 TID:0548000E PLFS!ERROR: 
> PLFS::BnLockerProxy::SyncPlfsToLocker: Error 205 [PLFS_ERROR_DB_LOCK_FAILED]; 
> (.\PLFSBnLockerManager.cpp:272
>
> what this shows (to me) is that proc A (07E50002) got the IMMEDIATE 
> transaction and proc B (04310006) failed to get an IMMEDIATE transaction.
>
> what is interesting is that proc B's 'item not found in locker!' is a read 
> operation.  so possibly i've (inadvertently) hit the 'shared lock upgrade to 
> reserved lock' immediate fail.
>
> thoughts?
>
> thanks
> tom
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> Sent: Monday, November 23, 2009 7:36 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] begin transaction
>
>> That's true, but the comment is a bit deceptive. In this
>> particular case SQLite is supposed to invoke the busy-handler. What
>> should happen is that SQLite grabs the SHARED lock then fails to get
>> the RESERVED. But in this case SQLite is smart enough to know that
>> it can release the SHARED lock, invoke the busy-handler, then try
>> again.
>
> I'm glad it is so.
> BTW, this special case (with your great examples), as well as all the cases
> where the busy handler is or is not invoked, would be better mentioned
> somewhere in here http://www.sqlite.org/lockingv3.html or here
> http://www.sqlite.org/c3ref/busy_handler.html.
>
> Pavel
>
> On Mon, Nov 23, 2009 at 10:17 AM, Dan Kennedy <danielk1...@gmail.com> wrote:
>>
>> On Nov 23, 2009, at 9:26 PM, Pavel Ivanov wrote:
>>
>>> According to SQLite sources:
>>>
>>> ** The pager invokes the busy-handler if sqlite3OsLock() returns
>>> ** SQLITE_BUSY when trying to upgrade from no-lock to a SHARED lock,
>>> ** or when trying to upgrade from a RESERVED lock to an EXCLUSIVE
>>> ** lock. It does *not* invoke the busy handler when upgrading from
>>> ** SHARED to RESERVED, or when upgrading from SHARED to EXCLUSIVE
>>> ** (which occurs during hot-journal rollback). Summary:
>>> **
>>> **   Transition                        | Invokes xBusyHandler
>>> **   --------------------------------------------------------
>>> **   NO_LOCK       -> SHARED_LOCK      | Yes
>>> **   SHARED_LOCK   -> RESERVED_LOCK    | No
>>> **   SHARED_LOCK   -> EXCLUSIVE_LOCK   | No
>>> **   RESERVED_LOCK -> EXCLUSIVE_LOCK   | Yes
>>>
>>>
>>> When you issue a BEGIN IMMEDIATE statement, SQLite has to first obtain a
>>> SHARED lock and right after that a RESERVED lock too.
>>
>> That's true, but the comment is a bit deceptive, In this
>> p
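
a sketch of the retry pattern this table implies, for anyone who prefers not
to rely on the busy handler for the SHARED -> RESERVED step (the 50 ms
backoff is an arbitrary choice):

/* retry BEGIN IMMEDIATE in application code, since the busy handler
** is not invoked for the SHARED -> RESERVED transition */
int begin_immediate(sqlite3 *db, int budget_ms){
  int rc;
  while( (rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0))==SQLITE_BUSY
         && budget_ms>0 ){
    sqlite3_sleep(50);
    budget_ms -= 50;
  }
  return rc;
}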

Re: [sqlite] begin transaction

2009-11-23 Thread Tom Broadbent
 written something to disk but RESERVED lock will
>> fail and no busy handler is called here.
>>
>> Besides that I cannot say what thoughts are behind this design
>> solution.
>>
>>
>> Pavel
>>
>> On Mon, Nov 23, 2009 at 9:15 AM, O'Neill, Owen <oone...@averyberkel.com
>> > wrote:
>>>
>>>
>>> Hi Tom,
>>>
>>> Whilst not knowing much about the process, I have a recollection
>>> about
>>> something in the documentation that said if sqlite thought that there
>>> was a potential for deadlock the busy handler was never even called.
>>>
>>> Could that explain this ?
>>>
>>> Cheers
>>> Owen
>>>
>>>
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>>> Sent: Friday, November 20, 2009 7:27 PM
>>> To: sqlite-users@sqlite.org
>>> Subject: [sqlite] begin transaction
>>>
>>> i have several processes all trying to write to a given DB.  i have
>>> set
>>> sqlite3_busy_timeout to 10000 (10 sec).  when proc A successfully begins
>>> a transaction IMMEDIATE what appears to happen when proc B attempts to
>>> begin transaction IMMEDIATE is that it fails immediately w/
>>> SQLITE_BUSY.
>>> i would expect that proc B would wait 10 sec trying to begin
>>> transaction
>>> rather than fail immediately.
>>>
>>> is this expected?
>>>
>>> thanks
>>> tom
>>>


[sqlite] begin transaction

2009-11-20 Thread Tom Broadbent
i have several processes all trying to write to a given DB.  i have set 
sqlite3_busy_timeout to 10000 (10 sec).  when proc A successfully begins a 
transaction IMMEDIATE what appears to happen when proc B attempts to begin 
transaction IMMEDIATE is that it fails immediately w/ SQLITE_BUSY.  i would 
expect that proc B would wait 10 sec trying to begin transaction rather than 
fail immediately.
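
the setup, roughly (a sketch; db is the connection handle in each process):

sqlite3_busy_timeout(db, 10000);  /* wait up to 10 sec on SQLITE_BUSY */
int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
/* expected: rc==SQLITE_BUSY only after ~10 sec of retrying */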

is this expected?

thanks
tom



Re: [sqlite] shared cache mode and 'LOCKED'

2009-10-28 Thread Tom Broadbent
i guess this isn't that complicated.  the error codes even say basically what 
you've said:

#define SQLITE_BUSY 5   /* The database file is locked */
#define SQLITE_LOCKED   6   /* A table in the database is locked */

i guess the point is that separate connections normally lock the entire DB file 
but in shared cache mode two connections (in the same process) can both have 
access to the DB file but not to the same table.  you've said this below as 
well.

the point is that in the diagram here (http://www.sqlite.org/sharedcache.html) 
if conn1 writes to tab1 then conn2 and conn3 will get SQLITE_BUSY, yes?  if 
conn2 writes to tab1 then conn1 will get SQLITE_BUSY but conn3 will get 
SQLITE_LOCKED (if trying to write to tab1; will succeed if trying to write to 
tab2).

correct?
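
(a sketch of how the two codes would be told apart at the call site; the
comments just restate my understanding above:)

int rc = sqlite3_step(stmt);
if( rc==SQLITE_BUSY ){
  /* file-level contention: lock held by another connection/process */
}else if( rc==SQLITE_LOCKED ){
  /* table-level contention inside the same shared cache */
}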

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 28, 2009 12:49 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

Almost. Locking happens at a table level in this case, not a database
level. Three different threads can all write at the same time, if they
write to different tables. But if two threads try to write to the same
table at the same time, one of them will return SQLITE_LOCKED.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 3:46 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

oh, right.  my bad.  i don't mean to share a connection between two
threads, but rather that each thread (with its own connection) in the
same process where shared cache mode is enabled will get an SQLITE_LOCKED
error rather than an SQLITE_BUSY error when these threads contend for the
DB.

is this right?


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Crenshaw
Sent: Wednesday, October 28, 2009 12:38 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

I don't know. Elsewhere it says you really shouldn't use the same
connection in multiple threads. I use a different connection in each
thread. With the shared cache, this results in very little overhead, so
I'm unsure why you would need to do this the "not recommended" way.

The contention between connections only applies to other processes if
the shared cache is enabled. With the shared cache each process will
lock the whole database, but connections in threads within that process
will only lock individual tables. This is really the right way to do a
multithreaded application, because otherwise contention is too great.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 3:32 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

to be clear...

"in other words, two threads sharing a connection in shared cache mode
will always cause SQLITE_LOCKED (rather than SQLITE_BUSY) when
contention occurs _between the two threads_.  if contention occurs from
another connection (i.e. a connection in a different process)
SQLITE_BUSY will be returned."

i believe this is correct.  experts?

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 12:27 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

i'm no expert on this, but my understanding is that since shared cache
mode 'shares a connection' you won't get SQLITE_BUSY but rather
SQLITE_LOCKED since the contention is 'internal' to the connection.

in other words, two threads sharing a connection in shared cache mode
will always cause SQLITE_LOCKED (rather than SQLITE_BUSY) when
contention occurs.

experts: pls correct me if i'm wrong here.

here is the advice i received when asking about a similar situation:

>> If other threads may also need a write lock on that table, you should
>> handle SQLITE_LOCKED by incrementing a waiter count and calling
>> sqlite3_unlock_notify. The thread doing the inserting can check to
>> see if anybody is waiting (blocked) and yield by committing the
>> current transaction and waiting for the blocked thread to unblock. Be
>> aware, you should also close any open cursors before yielding,
>> because open cursors will prevent write locks and you'll waste time
>> yielding for nothing.
>>
>> John

hope this helps (and isn't incorrect).

thanks
tom
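
in code, john's advice looks roughly like this (a sketch; it needs
SQLITE_ENABLE_UNLOCK_NOTIFY and shared cache mode, and signal_event/
wait_event stand in for whatever event primitive the platform provides):

static void unlock_cb(void **apArg, int nArg){
  int i;
  for(i=0; i<nArg; i++) signal_event(apArg[i]);  /* assumed event API */
}

int wait_for_unlock(sqlite3 *db, void *ev){
  int rc = sqlite3_unlock_notify(db, unlock_cb, ev);
  if( rc==SQLITE_OK ) wait_event(ev);            /* assumed event API */
  return rc;  /* SQLITE_LOCKED here means a deadlock was detected */
}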

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of O'Neill, Owen
Sent

Re: [sqlite] shared cache mode and 'LOCKED'

2009-10-28 Thread Tom Broadbent
oh, right.  my bad.  i don't mean to share a connection between two threads, 
but rather that each thread (with its own connection) in the same process where 
shared cache mode is enabled will get an SQLITE_LOCKED error rather than an 
SQLITE_BUSY error when these threads contend for the DB.

is this right?


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 28, 2009 12:38 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

I don't know. Elsewhere it says you really shouldn't use the same
connection in multiple threads. I use a different connection in each
thread. With the shared cache, this results in very little overhead, so
I'm unsure why you would need to do this the "not recommended" way.

The contention between connections only applies to other processes if
the shared cache is enabled. With the shared cache each process will
lock the whole database, but connections in threads within that process
will only lock individual tables. This is really the right way to do a
multithreaded application, because otherwise contention is too great.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 3:32 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

to be clear...

"in other words, two threads sharing a connection in shared cache mode
will always cause SQLITE_LOCKED (rather than SQLITE_BUSY) when
contention occurs _between the two threads_.  if contention occurs from
another connection (i.e. a connection in a different process)
SQLITE_BUSY will be returned."

i believe this is correct.  experts?

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 12:27 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

i'm no expert on this, but my understanding is that since shared cache
mode 'shares a connection' you won't get SQLITE_BUSY but rather
SQLITE_LOCKED since the contention is 'internal' to the connection.

in other words, two threads sharing a connection in shared cache mode
will always cause SQLITE_LOCKED (rather than SQLITE_BUSY) when
contention occurs.

experts: pls correct me if i'm wrong here.

here is the advice i received when asking about a similar situation:

>> If other threads may also need a write lock on that table, you should
>> handle SQLITE_LOCKED by incrementing a waiter count and calling
>> sqlite3_unlock_notify. The thread doing the inserting can check to
>> see if anybody is waiting (blocked) and yield by committing the
>> current transaction and waiting for the blocked thread to unblock. Be
>> aware, you should also close any open cursors before yielding,
>> because open cursors will prevent write locks and you'll waste time
>> yielding for nothing.
>>
>> John

hope this helps (and isn't incorrect).

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of O'Neill, Owen
Sent: Wednesday, October 28, 2009 10:45 AM
To: General Discussion of SQLite Database
Subject: [sqlite] shared cache mode and 'LOCKED'



Hi Everyone,
Does anyone know if this page is still up to date with respect to when
you get "SQLITE_LOCKED" when operating in shared cache mode ?
http://www.sqlite.org/cvstrac/wiki?p=DatabaseIsLocked


(I'm trying to solve a two writers problem and am trying to understand
the best way to solve it)
(and I think part of my problem is not understanding the difference
between  'locked' and 'busy' )

I've seen the notify example here.
http://www.sqlite.org/unlock_notify.html

and a 'busy' example here
http://www.sqlite.org/cvstrac/wiki?p=MultiThreading
http://www.sqlite.org/cvstrac/wiki?p=SampleCode

Is it possible for something to be 'locked', and then, after being unlocked,
when it tries again, to get 'busy' ?

Should my re-try strategy be the same or different for 'busy' and
'locked' and I guess if I get SQLITE_IOERR_BLOCKED
(http://www.sqlite.org/c3ref/busy_handler.html) I should always back off
and wait a while ?

Any tips for the 'best' way to tackle this gratefully received.

(I have one thread writing a lot but it can block for a 'long' time and
still be ok (up to 5 seconds) - and another one mostly reading and doing
a few occasional writes, but it can't block for long (>250ms) because
it's servicing the UI and repainting will stop.)

many thanks
Owen



Re: [sqlite] shared cache mode and 'LOCKED'

2009-10-28 Thread Tom Broadbent
and here is the link to the thread where i received the below advice:

http://sqlite.org:8080/cgi-bin/mailman/private/sqlite-users/2009-October/016404.html


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 12:32 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

to be clear...

"in other words, two threads sharing a connection in shared cache mode will 
always cause SQLITE_LOCKED (rather than SQLITE_BUSY) when contention occurs 
_between the two threads_.  if contention occurs from another connection (i.e. 
a connection in a different process) SQLITE_BUSY will be returned."

i believe this is correct.  experts?

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Tom Broadbent
Sent: Wednesday, October 28, 2009 12:27 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache mode and 'LOCKED'

i'm no expert on this, but my understanding is that since shared cache mode 
'shares a connection' you won't get SQLITE_BUSY but rather SQLITE_LOCKED since 
the contention is 'internal' to the connection.

in other words, two threads sharing a connection in shared cache mode will 
always cause SQLITE_LOCKED (rather than SQLITE_BUSY) when contention occurs.

experts: pls correct me if i'm wrong here.

here is the advice i received when asking about a similar situation:

>> If other threads may also need a write lock on that table, you should
>> handle SQLITE_LOCKED by incrementing a waiter count and calling
>> sqlite3_unlock_notify. The thread doing the inserting can check to
>> see if anybody is waiting (blocked) and yield by committing the
>> current transaction and waiting for the blocked thread to unblock. Be
>> aware, you should also close any open cursors before yielding,
>> because open cursors will prevent write locks and you'll waste time
>> yielding for nothing.
>>
>> John

hope this helps (and isn't incorrect).

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of O'Neill, Owen
Sent: Wednesday, October 28, 2009 10:45 AM
To: General Discussion of SQLite Database
Subject: [sqlite] shared cache mode and 'LOCKED'



Hi Everyone,
Does anyone know if this page is still up to date with respect to when
you get "SQLITE_LOCKED" when operating in shared cache mode ?
http://www.sqlite.org/cvstrac/wiki?p=DatabaseIsLocked


(I'm trying to solve a two writers problem and am trying to understand
the best way to solve it)
(and I think part of my problem is not understanding the difference
between  'locked' and 'busy' )

I've seen the notify example here.
http://www.sqlite.org/unlock_notify.html

and a 'busy' example here
http://www.sqlite.org/cvstrac/wiki?p=MultiThreading
http://www.sqlite.org/cvstrac/wiki?p=SampleCode

Is it possible for something to be 'locked', and then, after being unlocked,
when it tries again, to get 'busy' ?

Should my re-try strategy be the same or different for 'busy' and
'locked' and I guess if I get SQLITE_IOERR_BLOCKED
(http://www.sqlite.org/c3ref/busy_handler.html) I should always back off
and wait a while ?

Any tips for the 'best' way to tackle this gratefully received.

(I have one thread writing a lot but it can block for a 'long' time and
still be ok (up to 5 seconds) - and another one mostly reading and doing
a few occasional writes, but it can't block for long (>250ms) because
it's servicing the UI and repainting will stop.)

many thanks
Owen



Re: [sqlite] commit time

2009-10-24 Thread Tom Broadbent
the plot thickens..

we're running WinCE.  i'm using a service to do the writing (worker thread 
running in servicesd process).  i'm interested in using a synchronous ioctl 
call on the service to perform the reading.  this read ioctl call will be made 
from another process (not servicesd) and since WinCE uses thread migration in 
this situation it appears that i'll have a thread from the calling process 
migrate to the servicesd process where the reading will happen.  so during the 
ioctl call the calling thread will be in the servicesd process but said thread 
originated in a different process.

question: does thread migration cause issues for SQLite read uncommitted mode?

to be safe i can use an async ioctl and let the same worker thread in servicesd 
(writer thread) do the reading (this should surely be safe for SQLite read 
uncommitted mode).

the async approach complicates things for me but if there is concern about 
thread migration causing issues i'd rather be safe than sorry..

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Tom Broadbent
Sent: Thursday, October 22, 2009 10:51 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

thanks for the discussion.  i'll keep my eyes open for lock contention.  i'm 
going to start w/ the simple approach first and see how it goes.

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 11:00 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Dangerous and disturbing this puzzle is. Only a bug could have locked
those connections.

If I discover anything useful I'll report it separately (no need to
hijack this topic for that.)

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Thursday, October 22, 2009 1:27 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


On Oct 22, 2009, at 11:37 AM, John Crenshaw wrote:

> An open cursor will block. I've watched it. It was a major problem,
> and
> I spent many many hours stepping through SQLite before I finally
> figured
> it out. Once I carefully closed out cursors, the problem went away.
> (In
> my case I had a long running write process trying to commit a
> transaction so it could yield to another connection in a separate
> thread
> that wanted to write. If cursors were open on a table, the other
> connection would refuse to grab a write lock on that table, even
> though
> the transaction was committed and there were no open writers.)
>
> I don't remember where for sure (may have been in
> sqlite3BtreeBeginTrans) SQLite specifically checks for open cursors.
>
> The write lock doesn't stop you from reading, but an open cursor DOES
> stop you from writing. You have to check for SQLITE_LOCKED, no way
> around it.

I don't understand the situation described in the first paragraph. But
the statement above is at least not universally true. Tcl test cases
"shared-[01].3.11" and "shared-[01].3.1.2" (see around line 229 of
the test/shared.test file in the source distribution) are examples of
one connection writing to a table while a second connection is scanning
through the same table using an open cursor. In this case it is the
"second connection" that is operating in read_uncommitted mode.

Dan.




>
> John
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Thursday, October 22, 2009 12:06 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] commit time
>
>
> On Oct 22, 2009, at 5:21 AM, Tom Broadbent wrote:
>
>> if thread 1 opens a read cursor in read uncommitted mode it can
>> block a write lock?  i thought the read happens w/o a lock?
>
> If using read-uncommitted mode, a reader thread will not block a
> writer thread that is using the same shared-cache. Except, it does
> block a writer from modifying the database schema.
>
> Dan.
>
>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org
>> ] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 12:03 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Good, a single write thread saves you all the hassle involved with
>> yielding. Unfortunately, even without multiple writers blocking is
>> still
>> possible. If thread 1 opens a cursor, and thread 2 tries to write
>> before
>> that cursor has been closed, it will return SQLITE_

Re: [sqlite] commit time

2009-10-22 Thread Tom Broadbent
thanks for the discussion.  i'll keep my eyes open for lock contention.  i'm 
going to start w/ the simple approach first and see how it goes.

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 11:00 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Dangerous and disturbing this puzzle is. Only a bug could have locked
those connections.

If I discover anything useful I'll report it separately (no need to
hijack this topic for that.)

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Thursday, October 22, 2009 1:27 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


On Oct 22, 2009, at 11:37 AM, John Crenshaw wrote:

> An open cursor will block. I've watched it. It was a major problem,
> and
> I spent many many hours stepping through SQLite before I finally
> figured
> it out. Once I carefully closed out cursors, the problem went away.
> (In
> my case I had a long running write process trying to commit a
> transaction so it could yield to another connection in a separate
> thread
> that wanted to write. If cursors were open on a table, the other
> connection would refuse to grab a write lock on that table, even
> though
> the transaction was committed and there were no open writers.)
>
> I don't remember where for sure (may have been in
> sqlite3BtreeBeginTrans) SQLite specifically checks for open cursors.
>
> The write lock doesn't stop you from reading, but an open cursor DOES
> stop you from writing. You have to check for SQLITE_LOCKED, no way
> around it.

I don't understand the situation described in the first paragraph. But
the statement above is at least not universally true. Tcl test cases
"shared-[01].3.11" and "shared-[01].3.1.2" (see around line 229 of
the test/shared.test file in the source distribution) are examples of
one connection writing to a table while a second connection is scanning
through the same table using an open cursor. In this case it is the
"second connection" that is operating in read_uncommitted mode.

Dan.




>
> John
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Thursday, October 22, 2009 12:06 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] commit time
>
>
> On Oct 22, 2009, at 5:21 AM, Tom Broadbent wrote:
>
>> if thread 1 opens a read cursor in read uncommitted mode it can
>> block a write lock?  i thought the read happens w/o a lock?
>
> If using read-uncommitted mode, a reader thread will not block a
> writer thread that is using the same shared-cache. Except, it does
> block a writer from modifying the database schema.
>
> Dan.
>
>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org
>> ] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 12:03 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Good, a single write thread saves you all the hassle involved with
>> yielding. Unfortunately, even without multiple writers blocking is
>> still
>> possible. If thread 1 opens a cursor, and thread 2 tries to write
>> before
>> that cursor has been closed, it will return SQLITE_LOCKED. Since any
>> read query will return a cursor, there is always a possibility for
>> blocking, and you need to handle SQLITE_LOCKED.
>>
>> John
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>> Sent: Wednesday, October 21, 2009 2:09 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> very good.  i don't anticipate multiple writers so this should be
>> pretty
>> simple.
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 9:15 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Yes, you have to call sqlite3_enable_shared_cache before opening any
>> database connections, then execute "PRAGMA read_uncommitted = true;"
>> on
>> each connection. Blocking can still happen in some situations, but
>> you
>> can handle it as I described in my original reply.
>>

Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent
if thread 1 opens a read cursor in read uncommitted mode it can block a write 
lock?  i thought the read happens w/o a lock?

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 12:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Good, a single write thread saves you all the hassle involved with
yielding. Unfortunately, even without multiple writers blocking is still
possible. If thread 1 opens a cursor, and thread 2 tries to write before
that cursor has been closed, it will return SQLITE_LOCKED. Since any
read query will return a cursor, there is always a possibility for
blocking, and you need to handle SQLITE_LOCKED.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 2:09 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

very good.  i don't anticipate multiple writers so this should be pretty
simple.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Yes, you have to call sqlite3_enable_shared_cache before opening any
database connections, then execute "PRAGMA read_uncommitted = true;" on
each connection. Blocking can still happen in some situations, but you
can handle it as I described in my original reply.
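
Roughly (a sketch; the filename is just an example):

sqlite3_enable_shared_cache(1);   /* before any connection is opened */
sqlite3 *db;
sqlite3_open("content.db", &db);
sqlite3_exec(db, "PRAGMA read_uncommitted = true", 0, 0, 0);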

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to
obtain read-locks before reading_ from database tables as described
above. This can lead to inconsistent query results if another database
connection modifies a table while it is being read, but it also means
that a read-transaction opened by a connection in read-uncommitted mode
can neither block nor be blocked by any other connection."

this is precisely what i need.  thanks very much.


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
-- index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
-- index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is (see the sketch after this list):

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted
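
in code, that loop is roughly (a sketch; read_chunk() and the pairs array
are assumed placeholders):

sqlite3_stmt *ins;
int i, n;
sqlite3_prepare_v2(db,
    "INSERT INTO ContentWordItem(word_id, item_id) VALUES(?1,?2)",
    -1, &ins, 0);
while( (n = read_chunk(pairs, MAX_PAIRS)) > 0 ){  /* assumed reader */
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  for(i=0; i<n; i++){
    sqlite3_bind_int(ins, 1, pairs[i].word_id);
    sqlite3_bind_int(ins, 2, pairs[i].item_id);
    sqlite3_step(ins);
    sqlite3_reset(ins);
  }
  sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* the slow part, per below */
}
sqlite3_finalize(ins);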

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this 

Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent
very good.  i don't anticipate multiple writers so this should be pretty simple.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Yes, you have to call sqlite3_enable_shared_cache before opening any
database connections, then execute "PRAGMA read_uncommitted = true;" on
each connection. Blocking can still happen in some situations, but you
can handle it as I described in my original reply.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to
obtain read-locks before reading_ from database tables as described
above. This can lead to inconsistent query results if another database
connection modifies a table while it is being read, but it also means
that a read-transaction opened by a connection in read-uncommitted mode
can neither block nor be blocked by any other connection."

this is precisely what i need.  thanks very much.


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
-- index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
-- index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom


Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent
hmm.. okay. i'll have to refactor a bit (currently two separate processes).

this is still very helpful.  thanks.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:19 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

My understanding is that the shared cache allows table level locking for
multiple threads in a single process, and can do so efficiently because
the threads all share the same memory space, but if multiple processes
attempt to access the database, they will each use the original (full
database lock) methods for concurrency. Therefore, if my understanding
is correct, the "elsewhere" is the location that describes the normal
database level locking.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


reading up on shared cache mode and found this:

"The locking protocol used to arbitrate between multiple shared-caches
or regular database users is described _elsewhere_."

where is this described?


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
-- index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
-- index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom


Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent

it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to obtain 
read-locks before reading_ from database tables as described above. This can 
lead to inconsistent query results if another database connection modifies a 
table while it is being read, but it also means that a read-transaction opened 
by a connection in read-uncommitted mode can neither block nor be blocked by 
any other connection."

this is precisely what i need.  thanks very much.
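
a minimal sketch of opening such a reader, assuming shared-cache mode is on
(open_dirty_reader() and "content.db" are placeholder names, error handling
elided):

    #include <sqlite3.h>

    /* reader that never takes (or waits on) read locks; it may observe
       uncommitted rows from the writer, which is acceptable here */
    static sqlite3 *open_dirty_reader(void){
      sqlite3 *db = 0;
      sqlite3_enable_shared_cache(1);          /* must precede the open */
      sqlite3_open("content.db", &db);
      sqlite3_exec(db, "PRAGMA read_uncommitted=1;", 0, 0, 0);
      return db;
    }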


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] On 
Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);  -- index for fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);  -- index for fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom



Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent

reading up on shared cache mode and found this:

"The locking protocol used to arbitrate between multiple shared-caches or 
regular database users is described _elsewhere_."

where is this described?


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] On 
Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);  -- index for fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);  -- index for fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom



[sqlite] commit time

2009-10-20 Thread Tom Broadbent
i have a simple join table containing two ids from two other tables.  i have an 
index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);  -- index for fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);  -- index for fast deletes by item_id

i have a large amount of data to insert into this join table on a slow embedded 
device.  i need to avoid locking this join table for more than a second or two 
at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this table 
w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted
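
for concreteness, a minimal sketch of that loop (IdPair, read_chunk(), and
CHUNK_SIZE are hypothetical placeholders; error handling elided):

    #include <sqlite3.h>

    #define CHUNK_SIZE 1000

    typedef struct { int word_id; int item_id; } IdPair;

    /* stands in for reading the flat data file; returns pairs read */
    extern int read_chunk(IdPair *out, int max);

    void insert_all(sqlite3 *db){
      sqlite3_stmt *pStmt;
      IdPair chunk[CHUNK_SIZE];
      int i, n;

      sqlite3_prepare_v2(db,
          "INSERT INTO ContentWordItem(word_id, item_id) VALUES(?,?)",
          -1, &pStmt, 0);

      while( (n = read_chunk(chunk, CHUNK_SIZE)) > 0 ){
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for(i=0; i<n; i++){
          sqlite3_bind_int(pStmt, 1, chunk[i].word_id);
          sqlite3_bind_int(pStmt, 2, chunk[i].item_id);
          sqlite3_step(pStmt);
          sqlite3_reset(pStmt);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* the multi-second hit lands here */
      }
      sqlite3_finalize(pStmt);
    }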

i'm seeing that the reading, binding, and inserting is very fast (300 ms) but 
the commit is taking upwards of 3 seconds.  when i increase my chunk size by a 
factor of 10 the insert doesn't appear to take 10x longer but the commit still 
takes upwards of 3 seconds.  the point is that the commit hit appears to be 
much greater than the insert hit but doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long time.  is 
this a correct evaluation?

it also appears that the commit takes longer as the size of the table grows 
(i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid locking the 
db for a long time) i add a significant amount of time to the insert process 
because the commits are costing several seconds.  however, locking the db for a 
long time is not desirable.

i'm also concerned about the commit time increasing over time as the amount of 
data in the table increases.

is there a better approach?

thanks
tom



Re: [sqlite] heap corruption?

2009-05-14 Thread Tom Broadbent
turns out that someone else was trampling the heap.  problem solved.

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan
Sent: Monday, April 06, 2009 9:57 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] heap corruption?


On Apr 7, 2009, at 10:45 AM, Tom Broadbent wrote:

> hello -
>
> i'm using SQLite in an embedded application (WinCE) and i'm running
> into what appears to be a heap corruption issue.  the access violation
> happens at the following line (in pcache1Fetch):
>
> for(pPage=pCache->apHash[h]; pPage&&pPage->iKey!=iKey; pPage=pPage->pNext);
>
> mostly i'm curious what others have experienced relating to heap
> corruption and SQLite.  the bad address that is causing the AV
> appears to be UNICODE (ascii-range bytes, 0x00, ascii-range bytes,
> 0x00).  i realize this isn't much to work w/ but mostly i'm
> interested in a) others' experiences w/ this type of problem, b)
> hints to find the cause of the heap corruption, and c) tools that
> work w/ WinCE for detecting heap corruption (i've looked into
> CodeSnitch a bit).
>
> i'm running a vanilla version 3.6.6.2 w/ SQLITE_OS_WINCE,
> SQLITE_OMIT_LOCALTIME.

Compiling with both SQLITE_DEBUG and SQLITE_MEMDEBUG adds a bunch of
checks to each call to malloc() and free() that SQLite makes. If
SQLite is corrupting the heap itself it is likely an assert() will
fail during one of these checks. The stack trace might make it clearer
what is going on.

Also, you could try using the SQLITE_CONFIG_HEAP feature to configure
SQLite with its own private heap space by calling sqlite3_config()
before any other API:

   static u8 heap[1024*1024];
   sqlite3_config(SQLITE_CONFIG_HEAP, heap, 1024*1024, 32);

If it is not SQLite corrupting the heap (it could be other parts of
the app), then the crash will likely occur in some other sub-system
when SQLite is configured this way.

Dan.
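
A sketch of that suggestion as a complete start-up sequence (assuming a build
with SQLITE_ENABLE_MEMSYS5 or SQLITE_ENABLE_MEMSYS3, which SQLITE_CONFIG_HEAP
requires; init_private_heap() is a hypothetical name):

    #include <sqlite3.h>

    static unsigned char heapBuf[1024*1024];   /* SQLite's private heap */

    /* Must run before sqlite3_initialize() or any other SQLite API.
    ** The final argument (32) is the minimum allocation size. */
    int init_private_heap(void){
      int rc = sqlite3_config(SQLITE_CONFIG_HEAP, heapBuf, 1024*1024, 32);
      if( rc!=SQLITE_OK ) return rc;
      return sqlite3_initialize();
    }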


> thanks
> tom


Re: [sqlite] heap corruption?

2009-04-08 Thread Tom Broadbent
thanks for the suggestions.  i have tried all of these suggestions and we're 
working on this.

is there a function in SQLite that i can call that will (in effect) verify the 
page cache?  what i'd like to do is pepper the code that is calling into SQLite 
w/ verify_page_cache to attempt to figure out _where_ the corruption is 
occurring.
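
as a sketch of what that peppering might look like without a SQLite API for
it, one option is validating the whole process heap at each call site
(VERIFY_HEAP() is a hypothetical macro; HeapValidate is a Win32 call whose
availability on a given WinCE build should be confirmed):

    #include <windows.h>
    #include <assert.h>

    /* hypothetical stand-in for verify_page_cache: ask the OS to walk and
       validate the process heap, so the assert fires at the call site
       nearest the corruption */
    #define VERIFY_HEAP()  assert(HeapValidate(GetProcessHeap(), 0, NULL))

calling VERIFY_HEAP() before and after each sqlite3_step()/sqlite3_exec()
should bisect where the trampling happens.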

btw - i expanded out the failing for loop and it appears that pNext is the 
culprit.

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan
Sent: Monday, April 06, 2009 9:57 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] heap corruption?


On Apr 7, 2009, at 10:45 AM, Tom Broadbent wrote:

> hello -
>
> i'm using SQLite in an embedded application (WinCE) and i'm running
> into what appears to be a heap corruption issue.  the access violation
> happens at the following line (in pcache1Fetch):
>
> for(pPage=pCache->apHash[h]; pPage&&pPage->iKey!=iKey; pPage=pPage->pNext);
>
> mostly i'm curious what others have experienced relating to heap
> corruption and SQLite.  the bad address that is causing the AV
> appears to be UNICODE (ascii-range bytes, 0x00, ascii-range bytes,
> 0x00).  i realize this isn't much to work w/ but mostly i'm
> interested in a) others' experiences w/ this type of problem, b)
> hints to find the cause of the heap corruption, and c) tools that
> work w/ WinCE for detecting heap corruption (i've looked into
> CodeSnitch a bit).
>
> i'm running a vanilla version 3.6.6.2 w/ SQLITE_OS_WINCE,
> SQLITE_OMIT_LOCALTIME.

Compiling with both SQLITE_DEBUG and SQLITE_MEMDEBUG adds a bunch of
checks to each call to malloc() and free() that SQLite makes. If
SQLite is corrupting the heap itself it is likely an assert() will
fail during one of these checks. The stack trace might make it clearer
what is going on.

Also, you could try using the SQLITE_CONFIG_HEAP feature to configure
SQLite with its own private heap space by calling sqlite3_config()
before any other API:

   static u8 heap[1024*1024];
   sqlite3_config(SQLITE_CONFIG_HEAP, heap, 1024*1024, 32);

If it is not SQLite corrupting the heap (it could be other parts of
the app), then the crash will likely occur in some other sub-system
when SQLite is configured this way.

Dan.


> thanks
> tom


[sqlite] heap corruption?

2009-04-06 Thread Tom Broadbent
hello -

i'm using SQLite in an embedded application (WinCE) and i'm running into what 
appears to be a heap corruption issue.  the access violation happens at the
following line (in pcache1Fetch):

for(pPage=pCache->apHash[h]; pPage&&pPage->iKey!=iKey; pPage=pPage->pNext);

mostly i'm curious what others have experienced relating to heap corruption and 
SQLite.  the bad address that is causing the AV appears to be UNICODE 
(ascii-range bytes, 0x00, ascii-range bytes, 0x00).  i realize this isn't much 
to work w/ but mostly i'm interested in a) others' experiences w/ this type of 
problem, b) hints to find the cause of the heap corruption, and c) tools that 
work w/ WinCE for detecting heap corruption (i've looked into CodeSnitch a bit).

i'm running a vanilla version 3.6.6.2 w/ SQLITE_OS_WINCE, SQLITE_OMIT_LOCALTIME.

thanks
tom
