Re: [sqlite] 64 bit soft heap limit feature request

2010-09-05 Thread Dave Toll
Thank you for the explanation Roger. My apologies to Mark for hijacking
the thread.

Cheers,
Dave.


-Original Message-
From: Roger Binns [mailto:rog...@rogerbinns.com] 
Sent: Sunday, September 05, 2010 12:07 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 64 bit soft heap limit feature request

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/05/2010 12:15 AM, Dave Toll wrote:
 I do not understand why there is an assert towards the end of
 sqlite3_initialize() that fails on a 32-bit platform if SQLITE_DEBUG is
 defined (code taken from SQLite 3.6.23.1):
[...]
 Am I missing something here? I define SQLITE_INT64_TYPE as long in
 order to compile.

The integers that SQLite can store are up to 64 bit signed.  For example
'long long' on 32 bit platforms is typically 64 bit.  The section of
code you quoted looks at floating point values (double precision in C)
which also should be 64 bit.  It verifies that the 64 bit floating point
values and 64 bit integers are indeed 64 bit.  Then it verifies a way of
representing the floating point 'not a number' concept.  This all has
nothing to do with the prior messages in the thread :-)
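
As a standalone illustration of that check (using <stdint.h> types rather
than SQLite's internal u64 - a sketch, not SQLite's own code):

#include <assert.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

int main(void){
  uint64_t x = (((uint64_t)1)<<63)-1;   /* 0x7FFFFFFFFFFFFFFF */
  double y;
  assert( sizeof(x)==8 );               /* needs a true 64-bit integer type */
  assert( sizeof(x)==sizeof(y) );       /* double must also be 64 bits      */
  memcpy(&y, &x, 8);                    /* reinterpret the bit pattern      */
  assert( isnan(y) );                   /* exponent all ones => NaN         */
  return 0;
}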

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkyDCUEACgkQmOOfHg372QStfwCdEf3SuNqaoRmcNA9yg9dysnIo
BfsAoKn7OJscUIJspyVZxJYPlIJ+mRZV
=4lXy
-END PGP SIGNATURE-

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 64 bit soft heap limit feature request

2010-09-04 Thread Dave Toll
Hi Roger

While I broadly agree that SQLite is 32-bit software, I do not
understand why there is an assert towards the end of
sqlite3_initialize() that fails on a 32-bit platform if SQLITE_DEBUG is
defined (code taken from SQLite 3.6.23.1):


  /* The following is just a sanity check to make sure SQLite has
  ** been compiled correctly.  It is important to run this code, but
  ** we don't want to run it too often and soak up CPU cycles for no
  ** reason.  So we run it once during initialization.
  */
#ifndef NDEBUG
#ifndef SQLITE_OMIT_FLOATING_POINT
  /* This section of code's only output is via assert() statements. */
  if ( rc==SQLITE_OK ){
u64 x = (((u64)1)<<63)-1;
double y;
assert(sizeof(x)==8);    /* <-- FAILS ON 32-BIT PLATFORM, NO 64-BIT TYPE AVAILABLE */
assert(sizeof(x)==sizeof(y));
memcpy(&y, &x, 8);
assert( sqlite3IsNaN(y) );
  }
#endif
#endif


Am I missing something here? I define SQLITE_INT64_TYPE as long in order
to compile.

Cheers,
Dave.


-Original Message-
From: Roger Binns [mailto:rog...@rogerbinns.com] 
Sent: Friday, September 03, 2010 7:05 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 64 bit soft heap limit feature request

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/03/2010 02:26 AM, Mark Spychalla wrote:
 Having the option of using a value beyond 2GB would be
 useful for these newer systems for the very same reasons that having a
 soft heap limit of up to half the RAM on a system was useful on older
 systems that only supported 4GB of memory.

SQLite is pretty much 32 bit software, although it does work correctly
on 64 bit platforms and with large files.  This is because there are
many APIs where 'int' (32 bit on almost all 32 and 64 bit platforms) is
used.  size_t/ssize_t should have been used, although there may have
been some platforms early in SQLite's life that did not define it.

The API cannot simply be changed since that would result in binary
incompatibility - something that can only be done for SQLite 4.  Every
relevant API could be version bumped (e.g. adding a v2/v3 suffix), although
that would be fairly messy.

Or you could just live with it.  There is a 'Lite' in the name for a
reason :-)

I did do a survey of open source code when this issue was discussed
before and every example I found behaved as though the SQLite APIs took
size_t and not int.  A smarter cracker than me may figure out how to
exploit that.
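
To illustrate the size_t-vs-int point (values are illustrative; the relevant
fact is that APIs such as sqlite3_bind_blob() take their length as an int):

#include <limits.h>
#include <stdio.h>

/* Stand-in for any SQLite API that declares a length parameter as 'int'. */
static void take_int_length(int nBytes){
  printf("callee sees length = %d\n", nBytes);   /* may come out negative */
}

int main(void){
  size_t big = (size_t)INT_MAX + 2;   /* e.g. a blob just over 2GB */
  take_int_length((int)big);          /* silently wraps; the callee cannot tell */
  return 0;
}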

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkyBAGwACgkQmOOfHg372QRtLACfUkDkxhGD1RC6GihdWBSrzoIM
SUAAnjnlpu890zp5+h8jOV1Yrz5Pr6i+
=tep2
-END PGP SIGNATURE-

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] bug report: data race in multi-threaded execution of sqlite3

2010-07-19 Thread Dave Toll
Hi Heechul

I have used this test code as well, and I found that you have to treat 
SQLITE_SCHEMA the same way as SQLITE_LOCKED - you should retry the operation 
that returned this error. I believe it indicates that the master table is 
locked by another thread due to a DDL command (e.g. CREATE TABLE, DROP TABLE). 
It does seem to contradict FAQ #15.
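
A minimal sketch of that retry pattern (the retry cap and the absence of any
back-off are illustrative choices, not part of the original report):

#include <sqlite3.h>

static int exec_with_retry(sqlite3 *db, const char *sql){
  int rc, attempts = 0;
  do{
    rc = sqlite3_exec(db, sql, 0, 0, 0);
  }while( (rc==SQLITE_LOCKED || rc==SQLITE_SCHEMA) && ++attempts<100 );
  return rc;
}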

Anyone else, please feel free to correct me if I'm wrong - I use code that 
makes this assumption and I hope I haven't messed something up... I should 
mention that I am using shared cache mode.

Cheers,
Dave.


-Original Message-
From: Heechul [mailto:heechul@gmail.com] 
Sent: Sunday, July 18, 2010 10:15 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] bug report: data race in multi-threaded execution of sqlite3


Hi, 

I think there are data races that cause problems when running
multi-threaded applications. 

I downloaded the sqlite-3.6.23.1 source distribution and tested a
multi-threaded program found in test/threadtest1.c (slightly modified
due to obvious errors in the original code) on my Intel Core2Quad running
Ubuntu 10.04 (2.6.32-23-generic #37-Ubuntu SMP).

I compiled it as follows. 

$ gcc -DSQLITE_OMIT_LOAD_EXTENSION=1 -DTHREADSAFE=1 -g -o
threadtest-bugreport -lpthread  threadtest-bugreport.c ../sqlite3.c

The code simply creates multiple threads, each of which performs the
following sequence:

 open db 
 create table 
 insert 100 entries 
 select
 drop table 

Every two threads work on the same database file but on different
tables. For example, thread 1 and thread 2 open the same testdb-1,
but thread 1 creates table t1 and thread 2 creates table t2.

An example of a correct run is as follows:

$ ./threadtest-bugreport 10
10 threads 
2.testdb-2: END
1.testdb-5: END
2.testdb-4: END
1.testdb-3: END
1.testdb-2: END
1.testdb-4: END
2.testdb-5: END
1.testdb-1: END
2.testdb-1: END
2.testdb-3: END

However, I got the following intermittent error:

$ ./threadtest-bugreport 10
10 threads 
1.testdb-3: command failed: DROP TABLE t1; - database schema has changed
Exit with code 1

All operations are performed using the sqlite3_exec() API. Therefore,
according to the FAQ (q.15), I should not see an SQLITE_SCHEMA error at least.

Then I used the valgrind data-race detector (valgrind --tool=drd) and
found lots of data races, as follows:

valgrind --tool=drd ./threadtest-bugreport  2 
==23995== drd, a thread error detector
==23995== Copyright (C) 2006-2009, and GNU GPL'd, by Bart Van Assche.
==23995== Using Valgrind-3.6.0.SVN-Debian and LibVEX; rerun with -h for
copyright info
==23995== Command: ./threadtest-bugreport 2
==23995== 
2 threads 
==23995== Thread 2:
==23995== Conflicting store by thread 2 at 0x080c2058 size 4
==23995==at 0x804D3E3: pthreadMutexAlloc (sqlite3.c:15601)
==23995==by 0x804D270: sqlite3MutexAlloc (sqlite3.c:14918)
==23995==by 0x8052D06: unixEnterMutex (sqlite3.c:22329)
==23995==by 0x8054828: fillInUnixFile (sqlite3.c:25756)
==23995==by 0x805518B: unixOpen (sqlite3.c:26272)
==23995==by 0x804CC3A: sqlite3OsOpen (sqlite3.c:12604)
==23995==by 0x805ADD6: sqlite3PagerOpen (sqlite3.c:35419)
==23995==by 0x805F668: sqlite3BtreeOpen (sqlite3.c:40349)
==23995==by 0x80B5D4E: sqlite3BtreeFactory (sqlite3.c:97729)
==23995==by 0x80B65EF: openDatabase (sqlite3.c:98123)
==23995==by 0x80B67D2: sqlite3_open (sqlite3.c:98237)
==23995==by 0x8049698: worker_bee (threadtest-bugreport.c:205)
==23995== Allocation context: BSS section of
ul/Papers/cs523/dpthread/papps/sqlite/test/threadtest-bugreport
==23995== Other segment start (thread 3)
==23995==at 0x402D531: pthread_mutex_lock
(drd_pthread_intercepts.c:580)
==23995==by 0x804D419: pthreadMutexEnter (sqlite3.c:15660)
==23995==by 0x804D2AE: sqlite3_mutex_enter (sqlite3.c:14936)
==23995==by 0x8052D0E: unixEnterMutex (sqlite3.c:22329)
==23995==by 0x8054CB0: findReusableFd (sqlite3.c:26010)
==23995==by 0x8054E95: unixOpen (sqlite3.c:26119)
==23995==by 0x804CC3A: sqlite3OsOpen (sqlite3.c:12604)
==23995==by 0x805ADD6: sqlite3PagerOpen (sqlite3.c:35419)
==23995==by 0x805F668: sqlite3BtreeOpen (sqlite3.c:40349)
==23995==by 0x80B5D4E: sqlite3BtreeFactory (sqlite3.c:97729)
==23995==by 0x80B65EF: openDatabase (sqlite3.c:98123)
==23995==by 0x80B67D2: sqlite3_open (sqlite3.c:98237)
...
...
1.testdb-1: command failed: CREATE TABLE t1(a,b,c); - database schema
has changed
Exit with code 1
==23995== 
==23995== For counts of detected and suppressed errors, rerun with: -v
==23995== ERROR SUMMARY: 990 errors from 7 contexts (suppressed: 47 from
37)

I believe this is a bug. Please check whether this is the case.

Best 

Heechul 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite without 64-bit integers?

2010-03-12 Thread Dave Toll
I have been successfully using SQLite on a 32-bit embedded system for
some time without any problems - you can add -DSQLITE_INT64_TYPE=long to
the compile line, no need to hack the source. The only issue I have seen
is that it is not possible to use -DSQLITE_DEBUG as it enables some
assert() calls that will always fail on a 32-bit system. This makes
debugging SQLite issues rather difficult, but does not seem to cause any
serious problems.
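
For reference, this is roughly what the override amounts to in sqlite3.h
(a sketch, not the exact source text):

#ifdef SQLITE_INT64_TYPE
  typedef SQLITE_INT64_TYPE sqlite_int64;            /* 'long' = 32 bits here   */
  typedef unsigned SQLITE_INT64_TYPE sqlite_uint64;
#else
  typedef long long int sqlite_int64;                /* the normal 64-bit case  */
  typedef unsigned long long int sqlite_uint64;
#endif

So with -DSQLITE_INT64_TYPE=long on a 32-bit toolchain, SQLite's "64-bit"
types are really 32 bits wide, which is why the sizeof(x)==8 asserts fire
under SQLITE_DEBUG.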

Cheers,
Dave.


-Original Message-
From: yamada [mailto:rafael.yam...@gmail.com] 
Sent: Thursday, March 11, 2010 8:59 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Sqlite without 64-bit integers?



Hi,

I am having the same problem here.
Has anyone tested the solution given by Mattias? Are there any problems?


Mattias Ohlsson-3 wrote:
 
 I'm trying to use the SQLite database in an embedded system that does
 not have 64-bit integers, i.e. long long, which is used in SQLite. I've
 managed to compile by defining the 64-bit types as 32-bit types:
 
 typedef long int sqlite_int64;
 #define UINT64_TYPE unsigned long int
 
 I've also run some simple tests and it seems to work. However, I'm a
 bit worried that this might lead me into problems down the road. Has
 anyone tried to do this before? What kind of problems might I run into?
 

-- 
View this message in context:
http://old.nabble.com/Sqlite-without-64-bit-integers--tp2337711p27866285.html
Sent from the SQLite mailing list archive at Nabble.com.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite-users Digest, Vol 25, Issue 14

2010-01-14 Thread Dave Toll
Hi Jamie

Someone posted a problem recently related to SQLITE_CANTOPEN - I believe
the issue was that too many file descriptors were opened. Perhaps you
are using a file descriptor for every path you insert, and at some point
the OS returns an error when SQLite tries to get a file descriptor for
the journal file?

Cheers,
Dave.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of
sqlite-users-requ...@sqlite.org
Sent: Thursday, January 14, 2010 4:00 AM
To: sqlite-users@sqlite.org
Subject: sqlite-users Digest, Vol 25, Issue 14

Send sqlite-users mailing list submissions to
sqlite-users@sqlite.org

To subscribe or unsubscribe via the World Wide Web, visit
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
or, via email, send a message with subject or body 'help' to
sqlite-users-requ...@sqlite.org

You can reach the person managing the list at
sqlite-users-ow...@sqlite.org

When replying, please edit your Subject line so it is more specific than
Re: Contents of sqlite-users digest...
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP TRIGGER and sqlite3_close()

2009-10-21 Thread Dave Toll
I saw that DRH had added the following comment to the ticket:

2009-Oct-20 16:49:55 by drh: 
Whenever a prepared statement runs, it first checks to see if the schema has 
changed. If the schema has changed, then the statement must be recompiled and 
retried. This loop of checking for schema changes and recompiling is run 5 
times, after which statement execution is abandoned with an error.
What appears to be happening in this ticket is that one thread keeps trying to 
run a prepared statement, but the schema keeps getting changed out from under 
it by another thread. Each time the original thread reprepares its statement, 
another thread changes the schema yet again before the first thread has an 
opportunity to run the statement.

If the attached test program is modified so that the statement is retried even 
if it returns SQLITE_ERROR (in addition to SQLITE_SCHEMA and other values) then 
it works as desired.

So, in other words, this appears to be an application problem.


Retrying on SQLITE_ERROR does indeed work in my test case. However I was under 
the impression that SQLITE_ERROR implied a serious error - should my 
application always retry on SQLITE_ERROR returned from sqlite3_exec() or 
sqlite3_step()? Wouldn't SQLITE_SCHEMA be the more appropriate error code in 
this case?

Cheers,
Dave.


-Original Message-
From: Roger Binns [mailto:rog...@rogerbinns.com]
Sent: Tue 10/20/2009 6:19 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP 
TRIGGER and sqlite3_close()
 
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dave Toll wrote:
 Could you point me to the test case you referred to?

Look near the bottom of the ticket where it lists the various checkins.
[6955] is the test case.  You can examine the current source in case that
has changed at http://sqlite.org/src/dir

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkrd8TYACgkQmOOfHg372QQmLgCffJjKBUwjZ6uND78KbHe7fuWq
kGsAoM993+BlkjyBMeACJBYQapF0lQBV
=/txR
-END PGP SIGNATURE-


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP TRIGGER and sqlite3_close()

2009-10-20 Thread Dave Toll
Hi Roger

Your point about the preprocessor definitions is well-taken - I use them to 
optimize for my environment but they are not pertinent to the ticket. I 
recompiled sqlite3.c without all the definitions mentioned in the ticket 
(except -DSQLITE_OS_UNIX=1) and was still able to reproduce the error with my 
test code.

Could you point me to the test case you referred to?

Cheers,
Dave.


-Original Message-
From: Roger Binns [mailto:rog...@rogerbinns.com]
Sent: Mon 10/19/2009 9:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP 
TRIGGER and sqlite3_close()
 
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dave Toll wrote:
 The race condition described in http://www.sqlite.org/cvstrac/tktview?tn=3810 
 is
 still present in SQLite 3.6.19 (amalgamation version running on Linux)

My biggest problem with the ticket is the huge list of omits and other
definitions at the bottom.  Are they really necessary and pertinent to the
ticket?  They definitely shouldn't be used with the amalgamation.

One of the checkins is a test case for the test suite.  Perhaps supplying an
improved test case would also help a lot, and prevent a regression in the
future.

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkrcxnMACgkQmOOfHg372QSjTACffDE0K7pWwlyiVZx8Kl7GeVJ2
hVUAoKuBI7uN5x14opprsvrrML0fG8Ly
=oRps
-END PGP SIGNATURE-


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP TRIGGER and sqlite3_close()

2009-10-19 Thread Dave Toll
Hello list

I'm posting here as my ticket was recently closed. The race condition described 
in http://www.sqlite.org/cvstrac/tktview?tn=3810 is still present in SQLite 
3.6.19 (amalgamation version running on Linux) - the source code used to 
reproduce this issue has not changed and is still attached to the ticket. I'm 
happy to run more tests or supply more information if needed.

Summary: allowing CREATE TEMP TRIGGER on table A on one thread (with its own 
sqlite3* connection to database DB) to happen concurrently with sqlite3_close() 
on table B on another thread (with its own sqlite3* connection to database DB) 
can result in a subsequent DROP TRIGGER on table A failing with SQLITE_ERROR. 
Shared cache mode is enabled.

Cheers,
Dave.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite database on a certain high-performance SSD

2009-09-21 Thread Dave Toll
Hi Mark

I've had a little experience working with flash-based filesystems - I'd
recommend playing with the page_size and temp_store PRAGMAs (and of
course make sure you are using transactions to minimise the number of
file writes) to improve performance.
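
A sketch of that kind of tuning (the page size and temp_store values are
illustrative, not specific recommendations):

#include <sqlite3.h>

static int tune_for_flash(sqlite3 *db){
  /* page_size only takes effect before the database file is first created */
  int rc = sqlite3_exec(db, "PRAGMA page_size=4096;", 0, 0, 0);
  if( rc==SQLITE_OK ) rc = sqlite3_exec(db, "PRAGMA temp_store=MEMORY;", 0, 0, 0);
  /* ...and wrap bulk changes in BEGIN ... COMMIT to minimise file writes */
  return rc;
}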

Cheers,
Dave.


-Original Message-
From: Mark [mailto:godef...@gmail.com] 
Sent: 21 September 2009 13:54
To: sqlite-users@sqlite.org
Subject: [sqlite] SQLite database on a certain high-performance SSD

I've currently got a loaner high-performance flash-based SSD (let's 
just say it doesn't connect to any disk controllers) that I'm testing 
for performance. I've run my application against it, and I believe that 
I should see numbers MUCH higher than I do. When I run my test app on a 
normal SATA 7200 RPM disk, I get a certain performance, and on the SSD
I get about 1/10th that speed. On an array of SAS disks I get numbers 
that are about 5x faster than my SATA disk, so my software itself isn't 
(I believe) the bottleneck.

I'm wondering if anyone has any tips for optimizing for this sort of 
storage solution.

Also, if anyone has any quick-and-dirty test setups they'd like me to 
run through on this rig, just let me know :)


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] I'm surprised what the author of web2py on web2py mailing list just said about sqlite

2009-09-09 Thread Dave Toll
<quote>
I apologize and I conclude my statement is not correct.

I have experienced this problem once (out of disk space and sqlite
database corruption) but I made the mistake of confusing correlation
with causality. Except in that occasion I never had any problem with
sqlite and I do use it on production. On my web site (web2py.com) I
only run sqlite.

Massimo

</quote>


I wonder whether Massimo was referring to the possibility that a DB can
get into an unrecoverable state when you reach the disk space limit -
there's no room to create a rollback journal so you can't delete any
records or vacuum. Obviously this is only a concern for systems with
limited disk space.

It would be great to see a disk quota management system in a future
version of SQLite - or at least a way to associate sqlite3_file objects
with the DB connections that created them. I'm trying to implement a
quota system at the VFS level for an embedded platform, but it is
proving tricky to manage temp files...
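
For what it's worth, a very rough sketch of the quota idea at the VFS level;
the quota_file wrapper, QUOTA_BYTES and the wiring to the underlying VFS are
hypothetical glue, and a real implementation would have to sum the main file,
journal and temp files - which is exactly the hard part described above:

#include <sqlite3.h>

#define QUOTA_BYTES (2*1024*1024)

typedef struct quota_file quota_file;
struct quota_file {
  sqlite3_file base;     /* must be first so SQLite can treat it as an sqlite3_file */
  sqlite3_file *pReal;   /* file handle opened by the real (wrapped) VFS            */
};

static int quotaWrite(sqlite3_file *pFile, const void *pBuf,
                      int iAmt, sqlite3_int64 iOfst){
  quota_file *p = (quota_file*)pFile;
  if( iOfst+iAmt > QUOTA_BYTES ) return SQLITE_FULL;   /* refuse growth past budget */
  return p->pReal->pMethods->xWrite(p->pReal, pBuf, iAmt, iOfst);
}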

Cheers,
Dave.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Determining which DB a journal or temp file belongs to

2009-09-02 Thread Dave Toll
Hello list

 

I'm using SQLite with a custom VFS on an embedded platform. I'd like to
place an upper (soft) limit on the amount of disk space used by a
database - i.e. the sum of the main database file size, journal file
size, and any associated temp file sizes. At the VFS level, is there any
way to determine which database an sqlite3_file* belongs to?

 

There was an interesting thread on this topic last year involving
sqlite3_file_control():

http://www.nabble.com/How-to-limit-the-size-of-the-database-files--td20129606.html

 

However I think sqlite3_file_control() can only be used to access the
sqlite3_file* object corresponding to the main database file, not the
journal or temp files.

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Porting: Adding new VFS

2009-09-02 Thread Dave Toll
Hi Mwnn

If you don't require persistent storage and always call
sqlite3_open[_v2]() with the filename :memory: for an in-memory
database, then yes you should be able to assign pMethods to NULL. In
fact if you only use :memory:, you should be able to compile with
SQLITE_OMIT_DISKIO and you won't need a VFS at all. You may have to
check whether this compile option is supported by the amalgamation
source, if you use that.
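
A minimal sketch of the in-memory-only usage being described (the flags are
the usual open flags; nothing here is specific to your platform):

#include <sqlite3.h>

static int open_memory_db(sqlite3 **ppDb){
  return sqlite3_open_v2(":memory:", ppDb,
                         SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0);
}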

Cheers,
Dave.


-Original Message-
From: mwnn [mailto:mwnn...@gmail.com] 
Sent: 02 September 2009 18:50
To: sqlite-users@sqlite.org
Subject: [sqlite] SQLite Porting: Adding new VFS

Hi all,
I am working on porting SQLite to a proprietary operating
system. The application that uses SQLite keeps the database in memory.
So in this case, can I assign NULL to the pMethods pointer in the
sqlite3_file structure?

Regards,
mwnn 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] minimum cache_size for a database connection?

2009-06-17 Thread Dave Toll
What is the correct way to specify that SQLite should perform no caching
(or at least minimal caching)? PRAGMA cache_size=0 seems to have no
effect - either it continues to use the default, or tries to cache
everything.

Cheers,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:d...@hwaci.com] 
Sent: 16 June 2009 17:33
To: General Discussion of SQLite Database
Subject: Re: [sqlite] minimum cache_size for a database connection?


On Jun 16, 2009, at 8:28 PM, Dave Toll wrote:

 Hello list



 I've noticed that if I call PRAGMA cache_size=0, my database rows are
 still cached (very little disk access observed). If I call PRAGMA
 cache_size=1 I can see that there is very little caching (disk access
 observed). Is there a minimum allowed cache_size setting? Does
 cache_size=0 mean cache everything? I am using SQLite 3.6.13 with the
 default page cache, and shared-cache enabled.


I think any cache size suggestion less than 10 is ignored.  That used
to be the case.  I don't know if it still is.

Note that this does *not* control your operating system's file cache.
This is SQLite's user-space cache only.

If more pages than the cache allotment are required, then the cache
size limit is automatically increased.


D. Richard Hipp
d...@hwaci.com




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] minimum cache_size for a database connection?

2009-06-17 Thread Dave Toll
Both PRAGMA cache_size=10 and PRAGMA cache_size=1 result in minimal
caching, which is great. What is the expected behaviour of PRAGMA
cache_size=0 for the default page cache? I'd like to know whether
setting cache_size=0 has some special significance - maybe it's a case I
need to prevent.

Many thanks for the replies,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:d...@hwaci.com] 
Sent: 17 June 2009 10:08
To: General Discussion of SQLite Database
Subject: Re: [sqlite] minimum cache_size for a database connection?


On Jun 17, 2009, at 12:56 PM, Dave Toll wrote:

 What is the correct way to specify that SQLite should perform no
 caching (or at least minimal caching)? PRAGMA cache_size=0 seems to
 have no effect - either it continues to use the default, or tries to
 cache everything.


Did you try PRAGMA cache_size=10?

You can use sqlite3_config() with SQLITE_CONFIG_PCACHE to install your  
very own cache manager that does anything you like.  See

 http://www.sqlite.org/c3ref/config.html
 http://www.sqlite.org/c3ref/c_config_getmalloc.html
 http://www.sqlite.org/c3ref/pcache_methods.html

Other resources:

 http://www.sqlite.org/malloc.html#pagecache
 http://www.sqlite.org/c3ref/soft_heap_limit.html

D. Richard Hipp
d...@hwaci.com




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] minimum cache_size for a database connection?

2009-06-16 Thread Dave Toll
Hello list

 

I've noticed that if I call PRAGMA cache_size=0, my database rows are
still cached (very little disk access observed). If I call PRAGMA
cache_size=1 I can see that there is very little caching (disk access
observed). Is there a minimum allowed cache_size setting? Does
cache_size=0 mean cache everything? I am using SQLite 3.6.13 with the
default page cache, and shared-cache enabled.

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] TOCTOU pager bug in 3.6.14.2

2009-05-29 Thread Dave Toll
Could this be the same issue as this bug I reported a while ago?

http://www.sqlite.org/cvstrac/tktview?tn=3833

Cheers,
Dave.


-Original Message-
From: Brian Dantes [mailto:bdan...@coverity.com] 
Sent: 28 May 2009 16:01
To: Discussion of SQLite Database
Subject: [sqlite] TOCTOU pager bug in 3.6.14.2

See Ticket 3883

http://www.sqlite.org/cvstrac/tktview?tn=3883


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Concurrent sqlite3OsAccess() and sqlite3OsDelete() on journal file

2009-05-08 Thread Dave Toll
Hello list

 

I'm seeing an issue (on Linux build) where 2 different shared-cache DB
connections accessing the same database get confused about the status of
the journal file. One thread checks for the existence of the journal
with sqlite3OsAccess(), and at the same time the other deletes the
journal with sqlite3OsDelete(). Both calls are successful - one thread
thinks the journal exists and tries to open it read-only (resulting in
SQLITE_CANTOPEN), while the other thread has already deleted it. I
opened a ticket (http://www.sqlite.org/cvstrac/tktview?tn=3833) with all
the details for this issue against 3.6.13 and I can reproduce it against
3.6.14. I would expect these calls to be protected via a mutex or the
VFS file-locking mechanism - I've tried tracing this in the code but I'm
having difficulty following it. Can anyone shed any light on this? I'm
hoping I've missed something obvious.

 

A separate question - after a successful COMMIT or ROLLBACK, will the DB
connection ever keep a SHARED lock on the main DB file? Or should the DB
connection always fully unlock the main DB file?

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLITE_CORRUPT - database disk image is malformed

2009-04-10 Thread Dave Toll
Hello list

I am sometimes seeing the error SQLITE_CORRUPT with the message "database disk 
image is malformed" when I attempt CREATE TEMP TRIGGER. My test spawns 5 
threads, each of which modifies and queries its own table in a common database 
(each thread uses a separate connection from sqlite3_open_v2() to the same 
filename with shared cache enabled). The error occurs when one thread attempts 
CREATE TEMP TRIGGER while another thread is closing its own database connection 
(I have proved this by protecting all calls except these with semaphores). 
Looking through the SQLite 3.6.12 amalgamation code I found the following 
comment in sqlite3VdbeExec():

  ** Although the mutex on the BtShared object that corresponds to
  ** database iDb (the database containing the sqlite_master table
  ** read by this instruction) is currently held, it is necessary to
  ** obtain the mutexes on all attached databases before checking if
  ** the schema of iDb is loaded. This is because, at the start of
  ** the sqlite3_exec() call below, SQLite will invoke 
  ** sqlite3BtreeEnterAll(). If all mutexes are not already held, the
  ** iDb mutex may be temporarily released to avoid deadlock. If 
  ** this happens, then some other thread may delete the in-memory 
  ** schema of database iDb before the SQL statement runs. The schema
  ** will not be reloaded because the db->init.busy flag is set. This
  ** can result in a "no such table: sqlite_master" or "malformed
  ** database schema" error being returned to the user.

Could this be the reason for the error I'm seeing? Is this a known issue or 
expected behaviour? Do I need to protect DDL calls on multiple connections to 
the same database, or is there some other way to protect against this 
possibility?

Cheers,
Dave.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] question about shared cache

2009-03-24 Thread Dave Toll
Hello list

 

I have been trying to understand how shared cache works under the
covers, and how a custom VFS should behave when shared cache is enabled
- can anyone confirm this assumption:

 

Journal file handles (opened with SQLITE_OPEN_EXCLUSIVE) are shared
between database connections opened on the same DB filename. Access to a
journal file handle is serialised through the BtShared mutex of the
owning database - there is no explicit locking performed on journal
files.

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()

2009-03-06 Thread Dave Toll
Without wishing to complicate the subject, I have found that retrying a
sqlite3_step() after receiving SQLITE_LOCKED works fine for me without
needing sqlite3_reset(). I do this in a multi-threaded test app (based
on some old SQLite test code) that I have left running with 10 threads
for a long time and did not encounter any errors. I was using
shared-cache.

Are there perhaps some cases where a reset is required on SQLITE_LOCKED,
and other cases where it can be interpreted the same as SQLITE_BUSY?

Cheers,
Dave.


-Original Message-
From: Hynes, Tom [mailto:tom.hy...@inin.com] 
Sent: 06 March 2009 05:10
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()

Thanks Dan!

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan
Sent: Thursday, March 05, 2009 10:08 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()


On Mar 6, 2009, at 2:53 AM, Hynes, Tom wrote:

 Dan,

 I am a little confused now about how to treat a SQLITE_LOCKED
 error.  Is this to be treated the same as SQLITE_BUSY (i.e. can
 retry, unless an explicit transaction is in effect and the statement
 is not a COMMIT, in which case should roll back), or should it be
 treated differently?  If the latter, should it generally be handled
 like most other db errors (e.g. SQLITE_CONSTRAINT, SQLITE_CANTOPEN,
 etc.), or is there some specific error handling for SQLITE_LOCKED
 that is recommended?

Usually when sqlite3_step() returns something other than SQLITE_ROW, the
statement needs to be reset (sqlite3_reset()) before execution
can be re-attempted. However, SQLITE_BUSY errors are an exception.
After SQLITE_BUSY is returned, you can call sqlite3_step() again
immediately.
This I either didn't know, or forgot.

An SQLITE_LOCKED error means there is something preventing execution
within the same process. For example a lock on a shared-cache table.
For some applications, for example if another thread may release the
lock shortly, it may be appropriate to retry the query. In other apps,
for example single-threaded apps, it may not.

To retry a query that has failed with an SQLITE_LOCKED error, you need
to reset the statement before trying sqlite3_step() again.
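
A sketch of the pattern described here (the retry limit is an arbitrary
choice):

#include <sqlite3.h>

static int step_with_retry(sqlite3_stmt *pStmt){
  int rc, attempts = 0;
  for(;;){
    rc = sqlite3_step(pStmt);
    if( rc==SQLITE_LOCKED && attempts++<100 ){
      sqlite3_reset(pStmt);     /* required before re-attempting after LOCKED */
      continue;
    }
    if( rc==SQLITE_BUSY && attempts++<100 ){
      continue;                 /* BUSY may be retried without a reset */
    }
    return rc;
  }
}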

Dan.




 Thanks,

 Tom

 -Original Message-
 From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org
 ] On Behalf Of Dan
 Sent: Thursday, March 05, 2009 12:38 PM
 To: General Discussion of SQLite Database
 Subject: Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()


 On Mar 6, 2009, at 12:22 AM, Marcus Grimm wrote:

 Dan,

 I'm not sure what you mean by technically the SQLITE_BUSY is
 also wrong. In the test program I get the SQLITE_BUSY quite
 often and by retrying the sqlite3_step() it will sooner or
 later succeed.

 Huh. My mistake.



 Thanks for confirming that it may happend only for the
 first call to sqlite3_step(). This is an important point and
 it makes it in fact easy to workaround.
 I've changed my test application accordingly and now it can
 run endless, I even increased now to 10 reader threads and 2 writer
 threads in
 parallel without making sqlite particular nervous. Wonderful... :-)

 Thanks again

 Marcus

 Dan wrote:
 On Mar 5, 2009, at 11:14 PM, Marcus Grimm wrote:

 Hi Dan,

 thank you!

 Yes! I was not aware that it is wrong to redo a step
 when it previously returned the LOCKED state.
 That was the reason for the (correct) MISUSE return on
 the 2nd attempt.

 I think this is the main difference in my case between
 shared cache on/off: Without shared cache I never
 get this lock state and it just does the BUSY handling correctly.

 Technically the SQLITE_BUSY handling is also wrong, for the
 same reasons. But it's harder to hit a real SQLITE_BUSY than it
 is to hit an SQLITE_LOCKED in shared-cache mode.

 I'll change my locking handling accordingly, seems to be a problem
 of the very first step after a prepare, so that's should to be easy
 to reset...

 That's correct. An SQLITE_LOCKED or SQLITE_BUSY may only occur
 on the first call to sqlite3_step().

 Dan.



 Thanks again for your feedback

 Marcus

 Dan wrote:
 On Mar 5, 2009, at 9:57 PM, Marcus Grimm wrote:

 Hi again,

 I did a little test program that simulates the multi-threading
 issue to understand better why I get a MISUSE error even
 when sqlite3_prepare returns no error.
 The test program will printout some errors on the console
 and exits in that case. If I don't use the shared cache it
 runs endless without any problem.

 The code is too long to be placed here, but I would be very happy
 if somebody could take a look. I've put it on a web server to
 download:

 http://www.exomio.de/sqlitethreadtest.c

 I'm using MS Visual Studio 2008, the program is started within a
 dosbox to see the error output. Using it on my dual core PC
 it almost immediately stops with the 

Re: [sqlite] PRAGMA page_cache=x doesnt seem to have an effect

2009-02-25 Thread Dave Toll
Try sqlite3_soft_heap_limit() or use memsys3/memsys5 -
http://www.sqlite.org/malloc.html

Cheers,
Dave.


-Original Message-
From: ed [mailto:epdm...@gmail.com] 
Sent: 25 February 2009 10:50
To: General Discussion of SQLite Database
Subject: Re: [sqlite] PRAGMA page_cache=x doesnt seem to have an effect

Does anyone have any experience or examples of reducing SQLite memory
usage through the PRAGMA cache_size and page_size statements?
Am I misunderstanding the intended use of the pragmas?

thanks,
ed

On Fri, Feb 20, 2009 at 5:58 PM, ed epdm...@gmail.com wrote:
 Hello,
 I am using sqlite in an embedded environment with very limited memory.
 In an attempt to reduce memory consumed by sqlite, I am executing
 PRAGMA cache_size=new_size and PRAGMA default_cache_size=new_size upon
 opening all of my db's, which are both file based and in-memory. I
 have varied the value of new_size from 2000 (default), to 1000 to 500
 to 0 and none of these values seems to lower the amount of memory
 sqlite consumes.

 I have verified the new cache size settings were registered in the
 db's using PRAGMA cache_size.
 I then verify sqlite memory usage with the sqlite3_status() functions,
 which do not show a change in return value for
 SQLITE_STATUS_PAGECACHE_OVERFLOW or SQLITE_STATUS_MEMORY_USED (around
 7.9MB and 8.2MB respectively)

 I have not changed the default memory allocation (ie no scratch, cache
 or heap memory allocation at startup).
 I have configured sqlite3_soft_heap_limit(8192000).

 Any help in figuring out how to limit the cache usage of memory would
 be appreciated.

 --
 thanks,
 ed


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite cache question

2009-02-23 Thread Dave Toll
Hi Marcus

My understanding of SQLite caching is that in your scenario, 40MB is the
*maximum* cache memory that will be used per connection - this memory is
not immediately pre-allocated when you open a new connection. Using the
default memory allocator (memsys1) SQLite will allocate from the heap as
and when it needs to, and will simply stop caching if it cannot allocate
any more memory. Using sqlite3_soft_heap_limit() or alternative memory
allocators (memsys3 or memsys5) you can place an upper-bound on the
total memory that SQLite consumes. Have a look at
http://www.sqlite.org/malloc.html for more details.
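
A sketch of that upper-bound configuration (sizes are illustrative, and
SQLITE_CONFIG_HEAP only works when the library is compiled with one of the
alternative allocators, e.g. SQLITE_ENABLE_MEMSYS5):

#include <sqlite3.h>

static int cap_sqlite_memory(void){
  static char heap[2*1024*1024];   /* all the memory SQLite may use */
  int rc = sqlite3_config(SQLITE_CONFIG_HEAP, heap, (int)sizeof(heap), 64);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_soft_heap_limit(1024*1024);   /* advisory limit within that heap */
  return sqlite3_initialize();
}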

Personally I can vouch for shared-cache mode, I've found it to be
essential for multiple DB connections on an embedded system with limited
memory.

Cheers,
Dave.


-Original Message-
From: Marcus Grimm [mailto:mgr...@medcom-online.de] 
Sent: 23 February 2009 06:07
To: General Discussion of SQLite Database
Subject: [sqlite] sqlite cache question

Hi all,

I tried to scan the list and doc pages to better understand
SQLite's concept of caching, but I'm still not really
sure how to change the cache parameters to get the best
performance:

Assuming I change the cache by page size = 4096 and
cache_size = 10000, sqlite will then use appx. 40 MB cache memory
per db connection.

I've learned in a recent reply here that sqlite will reload the cache
for every db connection when one of the connections does a write
to the db, to avoid the other connections having stale cache data.
Doesn't this mean that increasing the cache size will actually slow
down database read/write operations in a multi-threaded / multi-connection
application, because now in the above example each connection will
reload 40MB before going ahead?

Also: I think it is dangerous to add too much cache in a multi-connection
application, since each connection will allocate the cache, and with
e.g. 20 connections running it will already allocate 1GB of RAM - am I right?
I've seen much higher cache size recommendations here (like 400MB) and
I'm wondering how that can work in a multi-connection application when
I assume a maximum of 2GB RAM (on a Win-32 system).

I know that there is the shared cache mode, but I got the impression that
this option is not very often used or recommended, so I'll skip it for now.

--
In the man pages for the cache pragma I read:
"If you are doing UPDATEs or DELETEs that change many rows
of a database and you do not mind if SQLite uses more memory, you can
increase..."

So actually the cache is only useful for SQL statements that alter the
database and not for simple SELECT statements?
I guess the man page is not really correct, right?
Otherwise I don't see why to change the cache size, because write
operations wait for disk I/O anyhow and I think that will outweigh
any cache issue.
--

I'm not saying that I have a performance problem - sqlite is doing great!
I just want to set up my application and the default cache sizes in the best
way so that they still fit when the db file grows in the future.

Thank you

Marcus Grimm


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite_open_v2() vs sqlite_open16()

2009-02-10 Thread Dave Toll
Hi John

You can execute the following SQL after opening your DB (but before
inserting any rows):

PRAGMA encoding="UTF-16";

http://www.sqlite.org/pragma.html
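
A sketch of the sequence (filename and flags are illustrative; the PRAGMA
must run on a brand-new database before any tables are created):

#include <sqlite3.h>

static int open_utf16_db(const char *zName, sqlite3 **ppDb){
  int rc = sqlite3_open_v2(zName, ppDb,
                           SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0);
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(*ppDb, "PRAGMA encoding=\"UTF-16\";", 0, 0, 0);
  }
  return rc;
}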

Cheers,
Dave.


-Original Message-
From: John Horton [mailto:john_horto...@hotmail.com] 
Sent: 10 February 2009 01:30
To: sqlite-users@sqlite.org
Subject: [sqlite] sqlite_open_v2() vs sqlite_open16()


Hello,
 
I am new to SQLite - please could someone tell me how to open a UTF-16
database using the sqlite3_open_v2() API function?
 
Thanks and regards,
 
John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] shared cache and journal files

2009-01-29 Thread Dave Toll
A little more detail on this issue:

It seems that my reading thread is not getting a shared lock on the
database before accessing the journal file. In pagerSharedLock() the
shared-cache Pager state is already PAGER_EXCLUSIVE because another
thread is writing to a different table in the same database - so a
shared lock is assumed and the function is a no-op. Does this imply some
misuse by my test code?

Cheers,
Dave.


-Original Message-
From: Dave Toll 
Sent: 28 January 2009 17:06
To: sqlite-users@sqlite.org
Subject: shared cache and journal files

Hello list

I have been using a modified version of threadtest1.c from the SQLite
test code to test the robustness of multi-threaded read/write
connections (SQLite 3.6.10) on an embedded platform (with my own VFS
implementation). The test code spawns 10 threads, each of which creates,
populates and queries its own table in one of 5 database files. No two
threads access the same table in the same database, but multiple threads
can access the same database. Every thread opens its own database
connection.

In shared cache mode (please correct me if I'm wrong), it seems that
every connection to the same database shares the same SQLite file handle
(sqlite3_file*). It is up to the platform's VFS to manage how this file
handle is used across multiple threads. Are journal file handles also
shared? I am seeing a situation where one thread starts a SELECT, which
triggers sqlite3OsRead() on a journal file that is still in use by
another thread writing to a different table in the same database (rough
callstack below in case it helps). Is this expected behaviour? As the
journal file was opened with SQLITE_OPEN_EXCLUSIVE, I had assumed that
it would not be accessed by multiple connections at the same time. Do I
have to handle concurrent access to journal files in my VFS?

sqlite3OsRead()
syncJournal()
pagerStress()
sqlite3PcacheFetch()
sqlite3PagerAcquire()
sqlite3BtreeGetPage()
getAndInitPage()
btreeCursor()
sqlite3BtreeCursor()
sqlite3VdbeExec()
sqlite3Step()
sqlite3_step()


Cheers,
Dave.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] shared cache and journal files

2009-01-29 Thread Dave Toll
It's funny how explaining an issue to a mailing list can make you spot
the problem yourself... there was indeed a problem with my test code,
database connections were not being opened for every thread as I
expected them to be.

I love SQLite - the bug is always in my code :)

Cheers,
Dave.


-Original Message-
From: Dave Toll 
Sent: 29 January 2009 12:27
To: sqlite-users@sqlite.org
Subject: RE: shared cache and journal files

A little more detail on this issue:

It seems that my reading thread is not getting a shared lock on the
database before accessing the journal file. In pagerSharedLock() the
shared-cache Pager state is already PAGER_EXCLUSIVE because another
thread is writing to a different table in the same database - so a
shared lock is assumed and the function is a no-op. Does this imply some
misuse by my test code?

Cheers,
Dave.


-Original Message-
From: Dave Toll 
Sent: 28 January 2009 17:06
To: sqlite-users@sqlite.org
Subject: shared cache and journal files

Hello list

I have been using a modified version of threadtest1.c from the SQLite
test code to test the robustness of multi-threaded read/write
connections (SQLite 3.6.10) on an embedded platform (with my own VFS
implementation). The test code spawns 10 threads, each of which creates,
populates and queries its own table in one of 5 database files. No two
threads access the same table in the same database, but multiple threads
can access the same database. Every thread opens its own database
connection.

In shared cache mode (please correct me if I'm wrong), it seems that
every connection to the same database shares the same SQLite file handle
(sqlite3_file*). It is up to the platform's VFS to manage how this file
handle is used across multiple threads. Are journal file handles also
shared? I am seeing a situation where one thread starts a SELECT, which
triggers sqlite3OsRead() on a journal file that is still in use by
another thread writing to a different table in the same database (rough
callstack below in case it helps). Is this expected behaviour? As the
journal file was opened with SQLITE_OPEN_EXCLUSIVE, I had assumed that
it would not be accessed by multiple connections at the same time. Do I
have to handle concurrent access to journal files in my VFS?

sqlite3OsRead()
syncJournal()
pagerStress()
sqlite3PcacheFetch()
sqlite3PagerAcquire()
sqlite3BtreeGetPage()
getAndInitPage()
btreeCursor()
sqlite3BtreeCursor()
sqlite3VdbeExec()
sqlite3Step()
sqlite3_step()


Cheers,
Dave.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] shared cache and journal files

2009-01-28 Thread Dave Toll
Hello list

I have been using a modified version of threadtest1.c from the SQLite test code 
to test the robustness of multi-threaded read/write connections (SQLite 3.6.10) 
on an embedded platform (with my own VFS implementation). The test code spawns 
10 threads, each of which creates, populates and queries its own table in one 
of 5 database files. No two threads access the same table in the same database, 
but multiple threads can access the same database. Every thread opens its own 
database connection.

In shared cache mode (please correct me if I'm wrong), it seems that every 
connection to the same database shares the same SQLite file handle 
(sqlite3_file*). It is up to the platform's VFS to manage how this file handle 
is used across multiple threads. Are journal file handles also shared? I am 
seeing a situation where one thread starts a SELECT, which triggers 
sqlite3OsRead() on a journal file that is still in use by another thread 
writing to a different table in the same database (rough callstack below in 
case it helps). Is this expected behaviour? As the journal file was opened with 
SQLITE_OPEN_EXCLUSIVE, I had assumed that it would not be accessed by multiple 
connections at the same time. Do I have to handle concurrent access to journal 
files in my VFS?

sqlite3OsRead()
syncJournal()
pagerStress()
sqlite3PcacheFetch()
sqlite3PagerAcquire()
sqlite3BtreeGetPage()
getAndInitPage()
btreeCursor()
sqlite3BtreeCursor()
sqlite3VdbeExec()
sqlite3Step()
sqlite3_step()


Cheers,
Dave.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITEBUSYTIMEOUT

2009-01-21 Thread Dave Toll
Some of your writes may be failing with SQLITE_BUSY or SQLITE_LOCKED if
others take too long to complete. If this is happening, you should retry
the write until it succeeds or increase your busy timeout (or both).
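
A sketch of both suggestions together (the 60-second value matches the
timeout mentioned in the original post; the retry cap is arbitrary):

#include <sqlite3.h>

static int write_with_retry(sqlite3 *db, const char *sql){
  int rc, attempts = 0;
  sqlite3_busy_timeout(db, 60000);   /* wait up to 60s on a locked database */
  do{
    rc = sqlite3_exec(db, sql, 0, 0, 0);
  }while( (rc==SQLITE_BUSY || rc==SQLITE_LOCKED) && ++attempts<100 );
  return rc;
}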

Cheers,
Dave.


-Original Message-
From: SATISH [mailto:g.satis...@gmail.com] 
Sent: 21 January 2009 01:38
To: 'General Discussion of SQLite Database'
Subject: [sqlite] SQLITEBUSYTIMEOUT

Hi!

 I have 10 threads in my application (by default my
application uses UTF-16), and I use the 10 threads to open the same sqlite
file and write into 10 different tables simultaneously. All these threads
write around 7000 records into 10 different tables.

While a thread opens the connection I set the SQLite busy timeout to 60
seconds. Each thread sleeps until its turn comes and then writes into the
database. After the complete execution of all 10 threads, I found 12 records
(out of 7000) which were not written into the sqlite file.

 

Can you please tell me why this loss of information is occurring?

 

Regards

G.Satish.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] reading beyond end of file

2009-01-19 Thread Dave Toll
Returning SQLITE_IOERR_SHORT_READ in this case solves my problem.

Many thanks,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:d...@hwaci.com] 
Sent: 16 January 2009 15:57
To: General Discussion of SQLite Database
Subject: Re: [sqlite] reading beyond end of file


On Jan 16, 2009, at 6:54 PM, D. Richard Hipp wrote:


 On Jan 16, 2009, at 6:43 PM, Noah Hart wrote:

 Just a random thought ... This is new code in pager.c,
 and if Pager->journalOff is at the end of the file,
 then perhaps it could cause his problem.

   **
   ** To work around this, if the journal file does appear to contain
   ** a valid header following Pager.journalOff, then write a 0x00
   ** byte to the start of it to prevent it from being recognized.
   */
   rc = sqlite3OsRead(pPager->jfd, zMagic, 8, jrnlOff);



 Noah is correct.  There was a bug in my earlier assert statement.   
 The code above reads past the end of the journal file when you are  
 in persistent journaling mode.



Note that correct behavior of the xRead method of the VFS in this case  
is to return SQLITE_IOERR_SHORT_READ since it should be reading 0 bytes.
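
A sketch of that xRead behaviour for a custom VFS (platform_read() is a
hypothetical host I/O call standing in for the real read, not a SQLite
function):

#include <string.h>
#include <sqlite3.h>

extern int platform_read(sqlite3_file*, void*, int iAmt, sqlite3_int64 iOfst);

static int demoRead(sqlite3_file *pFile, void *pBuf,
                    int iAmt, sqlite3_int64 iOfst){
  int got = platform_read(pFile, pBuf, iAmt, iOfst);   /* bytes actually read */
  if( got<0 ) return SQLITE_IOERR_READ;
  if( got<iAmt ){
    memset((char*)pBuf+got, 0, iAmt-got);   /* zero-fill the unread tail */
    return SQLITE_IOERR_SHORT_READ;
  }
  return SQLITE_OK;
}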

D. Richard Hipp
d...@hwaci.com




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] reading beyond end of file

2009-01-16 Thread Dave Toll
Hello list

 

I recently upgraded from SQLite 3.6.7 to 3.6.10 and I'm now noticing
some apparently undesirable behaviour. I'm running on an embedded system
with my own VFS implementation, and I see in my tests that SQLite is now
trying to read journal files at an offset beyond the end of the file. Is
anyone aware of any recent changes that could cause this scenario?
Should this case be handled within the VFS implementation?

 

Thanks,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] reading beyond end of file

2009-01-16 Thread Dave Toll
Hello Richard

I'm using a heavily-modified version of threadtest1.c from the SQLite
test suite to run on my embedded platform. I think Noah may be on to
something (thanks Noah!) - I looked at the stacktrace and found that the
specific sqlite3OsRead() call he mentioned was triggering my bad VFS
read.

Are previous versions of the amalgamation source available? I'd be happy
to try 3.6.8 and 3.6.9 to see at which version this issue appeared.

Cheers,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:d...@hwaci.com] 
Sent: 16 January 2009 15:30
To: General Discussion of SQLite Database
Subject: Re: [sqlite] reading beyond end of file


On Jan 16, 2009, at 5:38 PM, Dave Toll wrote:

 Hello list



 I recently upgraded from SQLite 3.6.7 to 3.6.10 and I'm now noticing
 some apparently undesirable behaviour. I'm running on an embedded  
 system
 with my own VFS implementation, and I see in my tests that SQLite is  
 now
 trying to read journal files at an offset beyond the end of the  
 file. Is
 anyone aware of any recent changes that could cause this scenario?
 Should this case be handled within the VFS implementation?



For testing this, I added an assert() to the unix VFS which will fire  
if it ever tries to read past the end of a journal file.  Then I ran  
our test suite.  The assert() never fired.  So in our test suite, at  
least, SQLite never reads past the end of a journal file.

I'm curious to know what you are doing to provoke it to read past the  
end of a journal file.

D. Richard Hipp
d...@hwaci.com


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.6.4

2008-10-15 Thread Dave Toll
I found a small compile error when SQLITE_MUTEX_NOOP is defined with the
amalgamation source:

sh4gcc -ansi -fsigned-char -mruntime=os21 -ml -c  -O0
-DSQLITE_OS_OTHER=1 -DSQLITE_32BIT_ROWID=1 -DSQLITE_INT64_TYPE=long
-DSQLITE_OMIT_AUTHORIZATION -DSQLITE_OMIT_AUTOINCREMENT
-DSQLITE_OMIT_AUTOINIT -DSQLITE_OMIT_AUTOVACUUM -DSQLITE_OMIT_CAST
-DSQLITE_OMIT_CHECK -DSQLITE_OMIT_COMPLETE -DSQLITE_OMIT_COMPOUND_SELECT
-DSQLITE_OMIT_CONFLICT_CLAUSE -DSQLITE_OMIT_DECLTYPE
-DSQLITE_OMIT_FLAG_PRAGMAS  -DSQLITE_OMIT_FOREIGN_KEY
-DSQLITE_OMIT_GET_TABLE -DSQLITE_OMIT_INCRBLOB
-DSQLITE_OMIT_LOAD_EXTENSION -DSQLITE_OMIT_PROGRESS_CALLBACK
-DSQLITE_OMIT_SCHEMA_VERSION_PRAGMAS -DSQLITE_OMIT_TCL_VARIABLE
-DSQLITE_OMIT_XFER_OPT -DUSE_RAWPPP_DL -DUSE_PSW_DL -DEVENTBRK_TEST
-DINCLUDE_TEST_POPUP -DOPENTV_FUSION -DFNS_REAL_SKEPTIC  -DDHCP
-DNO_PACKED_STRU -DNO_STASS -DRAWPPP_MODEM_TEST -DHTTP
-DSECURITY_PRESENT -DFUSION_CALLS_REDEFINE -DUSE_RAWSTK_MODEM_ERRORS
-DRAWPPP -DFUSION_CALLS_REDEFINE -DUSE_THE_XSOCKET -DDVB_SI
-DUSE_THE_HDMI -DUSE_THE_SQLDB -DUSE_THE_SQLITE -D__st40__
-Dinline=__inline -D__LITTLE_ENDIAN__ -D__LONG_INT__ -D__LONG_ALIGN__
-D__FLAT__ -D_ANSI_C_SOURCE -DKERNEL_HAS_ANSI -DUSE_THE_FONT16EXT
-DUSE_THE_EITS -DUSE_THE_HTTP -DUSE_THE_PPPLINK -DUSE_THE_XSOCKET
-DUSE_THE_ETHERNET -DUSE_THE_EVENTBRK -DUSE_THE_SVSCALE -DUSE_THE_IMAGE
-DUSE_THE_XYMAN -DUSE_THE_FEP -DUSE_THE_FEMGR -DUSE_THE_FONT16_GLUE
-DUSE_THE_NATAPP -DUSE_THE_MODEMSVC -DUSE_THE_USERPROF
-DUSE_THE_ATTRIBUTE -DUSE_THE_DEBTRACE -DUSE_THE_VOD -DUSE_THE_PVR
-DUSE_PVR_MPEG_MEDIUM -DUSE_CORE_2_DRIVERS  -DUSE_THE_SSLEXT
-DUSE_THE_OPENTV_SI -DUSE_THE_VKBENH -DUSE_THE_SCARTSW -DUSE_THE_APPSTOR
-DUSE_THE_EEFILSYS -DUSE_THE_FMFILE -DUSE_THE_RAMFSYS
-DUSE_THE_BC_SOURCE -DUSE_THE_RSB_LIB -DUSE_THE_RAWSTK -DUSE_THE_SCHED
-DUSE_THE_RESMAN -DUSE_THE_DSM -DUSE_THE_SLM -DUSE_THE_USB
-DUSE_THE_USB_COMM -DUSE_THE_USB_MSD -DUSE_THE_HDMI -DUSE_THE_OTHREAD
-DUSE_THE_GRAPHICS_JPEG_SUPPORT -DUSE_THE_GRAPHICS_PNG_SUPPORT
-DUSE_THE_SQLITE -DUSE_THE_SQLDB
-I/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/include
-I../src/include  -I. -g -DUSE_NO_ROMED_APPLICATIONS -osqlite3.o
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c: In function `sqlite3DefaultMutex':
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c:14997: error: `noopMutexLeave' undeclared (first use in this
function)
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c:14997: error: (Each undeclared identifier is reported only once
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c:14997: error: for each function it appears in.)
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c:14997: error: initializer element is not constant
/projects/OTV_OS/Core2/DEVELOP/mdal_db/opentv/external/opensrc/sqlite/sq
lite3.c:14997: error: (near initialization for `sMutex.xMutexLeave')


Renaming debugMutexLeave() to noopMutexLeave() at line 14987 seems to
fix it.

Cheers,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: 15 October 2008 05:12
To: [EMAIL PROTECTED]; General Discussion of SQLite Database
Subject: [sqlite] SQLite version 3.6.4

SQLite version 3.6.4 is now available for download on the SQLite  
website:

  http://www.sqlite.org/download.html

SQLite version 3.6.4 is considered a stable release.  Upgrading from  
version 3.6.3 is optional.  For a summary of changes and enhancements  
that have occurred in version 3.6.4 visit

 http://www.sqlite.org/3_6_4.html

D. Richard Hipp
[EMAIL PROTECTED]




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.6.4

2008-10-15 Thread Dave Toll
Sorry, I should have stated that I am not defining SQLITE_MUTEX_NOOP
explicitly. SQLITE_MUTEX_NOOP is defined implicitly if you let
SQLITE_THREADSAFE default to 1 and define SQLITE_OS_OTHER=1 (i.e. do not
use SQLITE_OS_UNIX, SQLITE_OS_WIN or SQLITE_OS_OS2). I know this won't
affect many people, but it's an issue if you're building for an embedded
platform.

Cheers,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: 15 October 2008 10:52
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite version 3.6.4


On Oct 15, 2008, at 1:31 PM, Dave Toll wrote:

 I found a small compile error when SQLITE_MUTEX_NOOP is defined with  
 the
 amalgamation source:


SQLITE_MUTEX_NOOP is an internal symbol that gets set if and only if  
you define SQLITE_THREADSAFE=0.  You should not set SQLITE_MUTEX_NOOP  
yourself.  Instead set SQLITE_THREADSAFE=0 and let SQLITE_MUTEX_NOOP  
be set automatically.

D. Richard Hipp
[EMAIL PROTECTED]




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] assert() in sqlite3_initialize() when SQLITE_DEBUG is defined

2008-10-09 Thread Dave Toll
Hello list

 

I'd like to define SQLITE_DEBUG to help trace some problems I am seeing,
but this causes sqlite3_initialize() to assert on a 32-bit system. Does
SQLite support 32-bit systems?

 

  /* The following is just a sanity check to make sure SQLite has
  ** been compiled correctly.  It is important to run this code, but
  ** we don't want to run it too often and soak up CPU cycles for no
  ** reason.  So we run it once during initialization.
  */
#ifndef NDEBUG
  /* This section of code's only output is via assert() statements. */
  if ( rc==SQLITE_OK ){
u64 x = (((u64)1)<<63)-1;
double y;
assert(sizeof(x)==8);
assert(sizeof(x)==sizeof(y));
memcpy(&y, &x, 8);
assert( sqlite3IsNaN(y) );
  }
#endif

 

I noticed that similar tests elsewhere in the code check that the size
of u64 is either 8 or 4. I'm building SQLite 3.6.3 with the following
defines: -DSQLITE_INT64_TYPE=long -DSQLITE_32BIT_ROWID=1
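
For reference, here is a trivial standalone check (nothing SQLite-specific)
of the sizes involved on this kind of ILP32 target. With SQLITE_INT64_TYPE
defined to long, SQLite's u64 ends up only 4 bytes wide, so the first assert
above cannot pass; long long (where the toolchain supports it) and double
are the 8-byte types the check expects:

  #include <stdio.h>

  int main(void)
  {
      /* Typical ILP32 embedded target: long is 4 bytes, long long is 8. */
      printf("sizeof(long)      = %u\n", (unsigned)sizeof(long));
      printf("sizeof(long long) = %u\n", (unsigned)sizeof(long long));
      printf("sizeof(double)    = %u\n", (unsigned)sizeof(double));
      return 0;
  }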

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] calculating cached page size for SQLITE_CONFIG_PAGECACHE

2008-10-06 Thread Dave Toll
Hello list

 

I wrote a function that attempts to calculate the cached page size for a
given page size, to be passed as the sz parameter for
sqlite3_config(SQLITE_CONFIG_PAGECACHE, buf, sz, N). It appears to work
so far (with 3.6.3) but I'd appreciate if anyone could go over it and
see if they can find any holes, or if it is likely to be broken by
future SQLite releases.

 

static size_t _get_cached_page_size(size_t page_size)
{
    sqlite3 *db = 0;
    int rc, size, size_hi;
    unsigned char *mem;

    /* ALLOCATE MEMORY FOR 3 DOUBLE-SIZE PAGES */
    mem = malloc(page_size * 6);

    /* CONFIGURE SQLITE PAGE CACHE FOR 3 DOUBLE-SIZE PAGES */
    rc = sqlite3_config(SQLITE_CONFIG_PAGECACHE, mem, (int)(page_size * 2), 3);

    /* INITIALISE SQLITE LIBRARY */
    rc = sqlite3_initialize();

    /* OPEN TEMP DATABASE */
    rc = sqlite3_open(":memory:", &db);

    /* SET PAGE SIZE (calls sqlite3_exec() with PRAGMA page_size=page_size) */
    rc = _pragma_set_int(db, "page_size", (int)page_size);

    /* CREATE TEMP TABLE */
    rc = sqlite3_exec(db, "CREATE TABLE _temp (data int)", 0, 0, 0);

    /* GET LARGEST MEMORY SIZE REQUESTED TO PAGE CACHE */
    rc = sqlite3_status(SQLITE_STATUS_PAGECACHE_SIZE, &size, &size_hi, 0);

    /* CLOSE TEMP DATABASE BEFORE SHUTTING DOWN */
    rc = sqlite3_close(db);

    /* SHUT DOWN SQLITE LIBRARY */
    rc = sqlite3_shutdown();

    /* FREE MEMORY AND RETURN SIZE ROUNDED UP TO NEAREST 8 BYTES */
    free(mem);
    size_hi = (size_hi + 7) & ~7;
    return (size_t)size_hi;
}
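
In case it's useful, this is roughly how I expect to call it (illustrative
only -- the slot count of 100 is a placeholder for whatever the application
actually needs, and buf must stay valid for as long as the library is in
use):

  size_t page_size = 4096;
  size_t sz = _get_cached_page_size(page_size);    /* probe run, above  */
  int n_slots = 100;                               /* placeholder value */
  unsigned char *buf = malloc(sz * (size_t)n_slots);
  int rc = sqlite3_config(SQLITE_CONFIG_PAGECACHE, buf, (int)sz, n_slots);
  if (rc == SQLITE_OK)
      rc = sqlite3_initialize();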

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] memsys3 vs memsys5

2008-09-10 Thread Dave Toll
Thanks Ralf, this info confirms my observations. I believe it should be
possible to write a function that initialises SQLite, calls
SQLITE_STATUS_PAGECACHE_SIZE to calculate the required page-cache
overhead for a given page size, and then reinitialises SQLite with the
new page-cache settings.

According to the documentation, the sz argument of
SQLITE_CONFIG_PAGECACHE must be a power of 2. If the page-cache has to
allow even a 1-byte overhead, this implies that page-cache pages must be
at least double the size of the actual pages in order to be usable.
However, I have found that I can get away with specifying sz as any
multiple of 4 and it seems to work ok. Is this luck, or a mistake in the
documentation?

Going back to memsys3 vs memsys5 - I have read the malloc page (I was
directed there after an earlier post), but I still don't see why the
difference in memory usage between memsys3 and memsys5 is so large - I
suspect I don't fully understand the power of 2, first fit algorithm.
My tests show that as page size increases, SQLite memory usage as a
percentage of DB file size decreases with memsys3. Using memsys5 and the
same data, SQLite memory used is consistently over double the DB file
size for any page size value. On an embedded system with limited memory,
I don't see the advantages of using memsys5 over memsys3.
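
My working theory (which may well be wrong, corrections welcome): memsys5
rounds every allocation up to the next power of two, whereas memsys3 carves
requests out of its pool roughly first-fit. For page-cache-sized requests
that rounding alone could explain most of the doubling I measured, e.g.:

  #include <stdio.h>

  /* Round n up to the next power of two -- what memsys5 is described as
  ** doing for every allocation request. */
  static size_t next_pow2(size_t n)
  {
      size_t p = 1;
      while (p < n) p <<= 1;
      return p;
  }

  int main(void)
  {
      size_t slot = 4096 + 152;   /* one page plus approximate header */
      printf("requested: %lu bytes\n", (unsigned long)slot);
      printf("memsys5  : %lu bytes\n", (unsigned long)next_pow2(slot)); /* 8192 */
      return 0;
  }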

One last point - I wasn't able to find a link to the malloc page from
anywhere in the SQLite site.

Cheers,
Dave Toll.


-Original Message-
From: Ralf Junker [mailto:[EMAIL PROTECTED] 
Sent: 09 September 2008 10:18
To: General Discussion of SQLite Database
Subject: Re: [sqlite] memsys3 vs memsys5


 Is there any
 reliable method to determine the minimum page-cache allocation size
 needed for a given page_size?


sqlite3_status(SQLITE_STATUS_PAGECACHE_SIZE, ...)

Unfortunately, there is extra overhead involved which is not handled by
the page cache but is allocated by malloc() if I understand correctly.

From reading the sources (some weeks ago, so the exact numbers might have
changed), I determined that an extra of at least 152 bytes is allocated for
each page.

Reference is nByteHdr in pager.c, plus an extra 4 bytes (on 32 bit) for
the slot in the page hash (probably a little more to accommodate empty
slots). Interestingly, the page cache overhead size can differ between
SQLite versions and also depends on SQLITE_OMIT_... compiler settings.

152 bytes might not seem much, but this is nearly 15% for a page cache
size of 1024 bytes, and nearly 30% for a 512-byte page cache. So you
save overhead by choosing a larger page size.

Concluding, memory requirements are much higher than just 

  pragma page_size * pragma cache_size

This can sum up to lots of RAM when using page_size = 10 or more to
speed up indexing of very large databases.
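
To put rough numbers on it (using the ~152 bytes estimate from above, so
treat this as an approximation only):

  pragma page_size = 1024, pragma cache_size = 2000:
    page data      : 2000 * 1024  = 2,048,000 bytes
    per-page extra : 2000 * ~152  =  ~304,000 bytes  (about 15% on top)

At 512-byte pages the same overhead is nearly 30%, while at 4096-byte pages
it drops to a few percent - hence the advice to choose a larger page size.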

Users should be aware of this when running SQLite on extreme (low RAM or
high size database) scenarios.

Ralf 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] memsys3 vs memsys5

2008-09-09 Thread Dave Toll
Hello list

 

I'm using SQLite 3.6.2 on an embedded C platform, and I recently
switched from memsys3 to memsys5. SQLite memory usage
(sqlite3_memory_used()) for a fully-cached database nearly doubled from
4.7MB to 8.9MB (using the same DB file in each test). Is this
normal/expected?

 

I've also played with SQLITE_CONFIG_PAGECACHE, which seems to be more
appropriate for a limited-memory system where the amount of data will
often be close to the memory limit. I discovered that it was necessary
to allow an overhead on top of the page_size when specifying the
page-cache allocation size (sz argument), otherwise the page-cache would
not be used as the page-cache units were too small. Is there any
reliable method to determine the minimum page-cache allocation size
needed for a given page_size?

 

Cheers,

Dave Toll.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index memory usage in SQLite

2008-08-28 Thread Dave Toll
I hadn't found the malloc page - thanks for the link, it helps a lot.
I'll look into moving to memsys5 when I upgrade to SQLite 3.6.1. I
assumed I needed SQLITE_32BIT_ROWID as I have no native 64-bit integer
support available - maybe it's not necessary after all? I also defined
SQLITE_INT64_TYPE as long.

Assuming a disk-based DB is fully-cached (I specified 8000 pages), is it
normal for an index to (temporarily) use many more cache pages than
those taken up by the index data itself (as determined by the DB file
size increase)?

Cheers,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: 27 August 2008 06:55
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Index memory usage in SQLite


On Aug 26, 2008, at 1:13 PM, Dave Toll wrote:

 Hello all



 I'm running some general performance tests on SQLite 3.5.9 (embedded C
 platform), and I noticed that creating an index seems to use a lot  
 more
 memory than I expected.



 An index on an integer column (30495 rows) uses 1,011,560 bytes.

 DB file size increases by 311,296 bytes.

Perhaps the extra memory is used by additional cache space.  What is  
your cache size set to?  (The default is 2000 pages.)

Have you read http://www.sqlite.org/malloc.html yet?





 An index on a varchar column (average null-terminated text length 18
 bytes, 30495 rows) uses 2,180,040 bytes.

 DB file size increases by 856,064 bytes.



 I'm using the static memory allocator (mem3.c), page size 4096, and I
 compiled with SQLITE_32BIT_ROWID. I measured the difference in memory
 reported by sqlite3_memory_used(). Dropping the index does not return
 any of the memory used, and does not reduce the DB file size. Are  
 these
 results normal, or is some optimisation possible?


The mem3.c allocator is deprecated and will likely go away in a future  
release.  mem5.c is preferred.  The SQLITE_32BIT_ROWID option has not  
been tested by me since I can remember.  I have no idea if it really  
works in all cases or not.  For all I know it causes a memory leak.

Dropping a table or index from a database causes the freed disk space  
to go onto a freelist to be used on the next INSERT.  The space is not  
returned to the OS and the file size is not reduced.  To reduce the  
database file size run VACUUM or enable auto_vacuum.
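
A minimal sketch of those two options in C, assuming an already-open
sqlite3 *db handle (error handling omitted):

  /* Rebuild the file and give free pages back to the filesystem: */
  sqlite3_exec(db, "VACUUM;", 0, 0, 0);

  /* Or let SQLite trim free pages automatically.  Note that auto_vacuum
  ** normally has to be set before the first table is created, or be
  ** followed by a VACUUM, to take effect on an existing database. */
  sqlite3_exec(db, "PRAGMA auto_vacuum = 1;", 0, 0, 0);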


D. Richard Hipp
[EMAIL PROTECTED]




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Index memory usage in SQLite

2008-08-27 Thread Dave Toll
Hello all

 

I'm running some general performance tests on SQLite 3.5.9 (embedded C
platform), and I noticed that creating an index seems to use a lot more
memory than I expected.

 

An index on an integer column (30495 rows) uses 1,011,560 bytes.

DB file size increases by 311,296 bytes.

 

An index on a varchar column (average null-terminated text length 18
bytes, 30495 rows) uses 2,180,040 bytes.

DB file size increases by 856,064 bytes.

 

I'm using the static memory allocator (mem3.c), page size 4096, and I
compiled with SQLITE_32BIT_ROWID. I measured the difference in memory
reported by sqlite3_memory_used(). Dropping the index does not return
any of the memory used, and does not reduce the DB file size. Are these
results normal, or is some optimisation possible?

 

Cheers,

Dave Toll.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index memory usage in SQLite

2008-08-27 Thread Dave Toll
Thanks Jay

I'm currently using an 8000 page cache, and as far as I know my DB
should have been fully-cached before I created the indices (it was built
from scratch and queried several times). Disk usage is pretty much as I
expected. Does creating an index use extra pages on top of the
fully-cached DB and the index representation itself? I'll have to read
up on VACUUM...

Cheers,
Dave.


-Original Message-
From: Jay A. Kreibich [mailto:[EMAIL PROTECTED] 
Sent: 27 August 2008 06:52
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Index memory usage in SQLite

On Tue, Aug 26, 2008 at 06:13:29PM +0100, Dave Toll scratched on the
wall:

 I'm running some general performance tests on SQLite 3.5.9 (embedded C
 platform), and I noticed that creating an index seems to use a lot
more
 memory than I expected.

  Creating an index requires reading and sorting the original table.
  It tends to beat the page cache fairly hard.  This is a known
  condition.

  In fact, the first bit of advice to increase the speed of index
  creation is to make the page cache larger.
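
  For example (sketch only -- the 8000 and the table/index names are just
  placeholders for whatever the application uses):

    /* Enlarge the per-connection page cache before building the index: */
    sqlite3_exec(db, "PRAGMA cache_size = 8000;", 0, 0, 0);
    sqlite3_exec(db, "CREATE INDEX idx_data ON _temp(data);", 0, 0, 0);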

 An index on an integer column (30495 rows) uses 1,011,560 bytes.
 
 DB file size increases by 311,296 bytes.

  Or about 10.2 bytes per item, which sounds pretty reasonable.

  An index in SQLite contains a full copy of the indexed data, so that
  plus a rowid reference back to the original table and other metadata
  makes 10 bytes sound about right, especially if most of the integers
  are smallish (SQLite uses var-length integers).

 An index on a varchar column (average null-terminated text length 18
 bytes, 30495 rows) uses 2,180,040 bytes.
 
 DB file size increases by 856,064 bytes.

  Same thing.  28 bytes per item sounds reasonable, especially if the
  average data item is 18 bytes.  Strings won't pack into pages quite
  as efficiently, so I'd expect a higher overhead.
 
 I'm using the static memory allocator (mem3.c), page size 4096, and I
 compiled with SQLITE_32BIT_ROWID. I measured the difference in memory
 reported by sqlite3_memory_used().

 Dropping the index does not return any of the memory used,

  If you're measuring the process memory usage, that's normal.  Free
  pages aren't returned to the OS.

  If you're measuring the allocator, chances are most of that memory is
  in the page cache, and will not be returned.  The default page size
  is 1K and the default cache size is 2K pages, with about 0.5K of overhead
  per page, meaning the default system expects about 3MB worth of
  cache.  Neither of the examples you've given goes over that, so unless
  you can figure out otherwise, I'd assume most of that is cache
  allocation.  The cache will grow until it hits its max size but won't
  be returned.

 and does not reduce the DB file size.

  That's normal.  As with memory systems, free pages are not released
  unless you vacuum the database file.

   -j

-- 
Jay A. Kreibich  J A Y  @  K R E I B I.C H 

'People who live in bamboo houses should not throw pandas.' Jesus said
that.
   - The Ninja, www.AskANinja.com, Special Delivery 10: Pop!Tech
2006

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users