Re: [sqlite] Functions embedded in SQL statements
[EMAIL PROTECTED] wrote: I'm thinking that all documentation is better placed in a wiki. Hmmm. The problem I see is that it makes access to the full documentation contingent on connectivity to a possibly ephemeral external site. Maybe the solution is to incorporate wiki snapshots into the distribution somehow (maybe by including an embedded server in the distribution, something like Sean Burke's Podwebserver that enables me to have the complete Perl documentation, in HTML format, sitting in a tab on my browser even if my Internet connection goes down). I guess I just have a problem with the "everything you need to know is on someone else's machine" attitude that seems to be spreading.
[sqlite] How to port SQLite to a uc/os-II OS with customized file system?
Hi all, I'm trying to build a database engine based on the uc/os-II RTOS with my own customized file system (similar to FAT16, but not exactly the same). I find that SQLite is a good choice. I have read the SQLite source code for several days, but I still have no idea where I should begin. SQLite is a db engine totally based on disk files, so I guess most of my future work should be to wrap my self-defined file system to present the same interface as needed in os_win.c. Is that correct? Could anyone give me some advice? Another question: because my project will run in an embedded environment, I have to take care of RAM consumption. I have gone through the mailing list but have not found a description of minimum RAM usage. Could anyone tell me how much RAM is needed to run SQLite in an embedded environment? Thanks in advance! Best regards, Sarah
Re: [sqlite] SQLiteSpy 1.5.4 released
Hello C.Peachment, 1. SQLiteSpy is able to read and work with database files formatted by versions of Sqlite earlier than 3.3.6 but it also appears to change the database format rather than leave it as it was found. I use php version 5.1.4 including Sqlite version 3.2.8. There is a database format change in later versions of Sqlite that means php is NOT able to read these later versions. SQLiteSpy does not normally change the database format, just as SQLite does not. However, starting with SQLite 3.3.0, the file format changed slightly. This is no problem until you run the VACUUM command, which I suppose you did? VACUUM causes SQLite (and therefore SQLiteSpy as well) to rewrite the database and to update it to the latest file format. There are two solutions to your problem: 1. Don't run VACUUM from SQLite 3.3.0 or later ;=) 2. Issue PRAGMA legacy_file_format=ON; before running VACUUM. This instructs SQLite to use the pre-3.3.0 file format and maintain compatibility with your 3.2.8 PHP version. Btw: The problem applies to all software using SQLite 3.3.0 or later, even to the SQLite command-line application. 2. After executing some sql against a database, the result set is displayed. If the database is closed after that, the result set stays on the screen when it might be less confusing if it disappeared as the database was closed. I felt it would be nice to keep the latest output in case a user wants to compare it with another database still to be opened. But since some users find this confusing, I will likely change SQLiteSpy to close all SQL statements and result grids when the database is closed. Regards, Ralf
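Ralf's second workaround can be sketched end to end. This is my own illustration using Python's bundled sqlite3 module in place of the C API; the path and row count are arbitrary, not from the thread:

```python
import os
import sqlite3
import tempfile

# Throwaway database file (the path is illustrative).
path = os.path.join(tempfile.mkdtemp(), "legacy_demo.db")

con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.execute("PRAGMA legacy_file_format=ON")  # request the pre-3.3.0 file format
con.execute("CREATE TABLE t(x)")
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
con.execute("COMMIT")
con.execute("VACUUM")  # rewrites the whole file; the pragma keeps the old format
count, = con.execute("SELECT count(*) FROM t").fetchone()
con.close()
```

SQLite silently ignores pragmas it does not recognize, so issuing the line on a library older than 3.3.0 is harmless.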
Re: [sqlite] Functions embedded in SQL statements
[EMAIL PROTECTED] wrote: Eric Bohlman [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: I'm thinking that all documentation is better placed in a wiki. Hmmm. The problem I see is that it makes access to the full documentation contingent on connectivity to a possibly ephemeral external site. Maybe the solution is to incorporate wiki snapshots into the distribution somehow (maybe by including an embedded server in the distribution, something like Sean Burke's Podwebserver that enables me to have the complete Perl documentation, in HTML format, sitting in a tab on my browser even if my Internet connection goes down). I guess I just have a problem with the "everything you need to know is on someone else's machine" attitude that seems to be spreading. This is a very reasonable point. The new SCM I (and others) are working on will allow you to quickly and easily download the entire source code/wiki/ticket repository and/or synchronize your local repository with remote changes. So ultimately this will not be an issue. But all that is still in the future. Perhaps you can suggest minor changes or enhancements to CVSTrac (http://www.cvstrac.org/) that will allow us to do something similar with the wiki right away - some way to download the whole wiki collection and run it locally. CVSTrac already includes its own web server, so that part is done for you already. -- D. Richard Hipp [EMAIL PROTECTED]
Re: [sqlite] Functions embedded in SQL statements
Ralf Junker [EMAIL PROTECTED] wrote: This is especially valuable for all who need to work with older versions of SQLite because their environment has not yet updated to the latest release. It can be very unfortunate for them to find updated information which might be inaccurate or even wrong (at least partially) for their older releases. Another good point. And yet I am still very interested in moving toward wiki-style user-editable documentation. I agree that the current arrangement is inadequate. Please offer suggestions on how it can be improved. Perhaps it would be sufficient to take snapshots of the wiki and ship that with each release? -- D. Richard Hipp [EMAIL PROTECTED]
[sqlite] Opening the database file for read on Windows XP
Hi all, I wonder if someone can guide me on how to open the sqlite3 database file for reading on Windows XP while the database is already opened by the sqlite3 API. I have an application that uses the sqlite3 API and opens the database file. While the file is opened (for reading) by sqlite3, I would like to copy the database file (so as to have a copy of the file). I guess I need to place a shared lock on the file (like sqlite3 does when reading from the file). I wonder what is the recommended way of doing that. I saw that in http://www.sqlite.org/cvstrac/getfile/sqlite/src/os_win.c there are some functions that might help me do that, for example: int sqlite3WinOpenReadOnly(const char *zFilename, OsFile **pId) but those functions are internal to sqlite3 (so they are not exposed in the API), so I am not sure if it is a good idea to use them. I will appreciate any help in this matter. Thanks in advance, Ran
Re: [sqlite] SQLiteSpy 1.5.4 released
On Wed, 21 Jun 2006 09:24:35 +0200, Ralf Junker wrote: 1. SQLiteSpy is able to read and work with database files formatted by versions of Sqlite earlier than 3.3.6 but it also appears to change the database format rather than leave it as it was found. I use php version 5.1.4 including Sqlite version 3.2.8. There is a database format change in later versions of Sqlite that means php is NOT able to read these later versions. SQLiteSpy does not normally change the database format, just as SQLite does not. However, starting with SQLite 3.3.0, the file format changed slightly. This is no problem until you run the VACUUM command, which I suppose you did? VACUUM causes SQLite (and therefore SQLiteSpy as well) to rewrite the database and to update it to the latest file format. There are two solutions to your problem: 1. Don't run VACUUM from SQLite 3.3.0 or later ;=) 2. Issue PRAGMA legacy_file_format=ON; before running VACUUM. This instructs SQLite to use the pre-3.3.0 file format and maintain compatibility with your 3.2.8 PHP version. Btw: The problem applies to all software using SQLite 3.3.0 or later, even to the SQLite command-line application. It appears that VACUUM is not the only SQL command to cause this behaviour. I had done a number of INSERT and UPDATE commands before closing the database and attempting to use it with PHP. The VACUUM command was not used. For the moment, my situation is not unique since other users of PHP will face it too. The pragma legacy_file_format=on instruction should solve the problem. Any possibility that it could be set as a configuration parameter for SQLiteSpy to avoid the need to issue the pragma command each time SQLiteSpy is used? I have searched the documentation for this pragma but without success. Is this a case of Star Wars: Use the source, Luke? :-)
Re: [sqlite] How to port SQLite to a uc/os-II OS with customized file system?
Sarah uttered: Hi all, I'm trying to build a database engine based on the uc/os-II RTOS with my own customized file system (similar to FAT16, but not exactly the same). I find that SQLite is a good choice. I have read the SQLite source code for several days, but I still have no idea where I should begin. SQLite is a db engine totally based on disk files. So I guess most of my future work should be to wrap my self-defined file system to present the same interface as needed in os_win.c. Is that correct? Could anyone give me some advice? Implement the interface defined by struct IoMethod in os.h. Use the existing os_*.c as templates, yes. If you can provide a largely POSIX-like interface to your file system, you could use os_unix.c largely unchanged. You might want to strip out the nightmare locking code, though. :) Another question: because my project will run in an embedded environment, I have to take care of RAM consumption. I have gone through the mailing list but have not found a description of minimum RAM usage. Could anyone tell me how much RAM is needed to run SQLite in an embedded environment? That depends. SQLite can have functionality conditionally compiled out, reducing its size. The front page (http://www.sqlite.org/) proclaims: Small code footprint: less than 250KiB fully configured or less than 150KiB with optional features omitted. YMMV. Your best bet is to choose the features you don't need, compile up your target library, and measure the code size yourself. Thanks in advance! Best regards, Sarah Christian -- /\ \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL X - AGAINST MS ATTACHMENTS / \
[sqlite] Delete performance vs. Insert performance
Hi. I tested big-delete and big-insert performance on a Windows CE device in various cache size configurations (1MB, 100KB, 50KB). Inserting 3000 records completes in 23s, 43s and 61s respectively for each cache size configuration. However, deleting 1000 of the 3000 records completes in about 0.4s for all cache size configurations. Why does the delete operation outperform the insert operation? And how come the delete operation is independent of cache size? I think the index-update costs are almost the same for insert and delete operations. Can anybody explain the reason? -- Insun Kang
Re: [sqlite] Opening the database file for read on Windows XP
On 6/21/06, Ran [EMAIL PROTECTED] wrote: I have an application that uses the sqlite3 API and opens the database file. While the file is opened (for reading) by sqlite3, I would like to copy the database file (so as to have a copy of the file). I guess I need to place a shared lock on the file (like sqlite3 does when reading from the file). I wonder what is the recommended way of doing that. Ran, I do this to replicate the database for backup. Acquire an immediate lock ( begin immediate ) then you can copy the file. Source code in C++ is downloadable from my sqlite support page (see the replicator). -- SqliteImporter and SqliteReplicator: Command line utilities for Sqlite http://www.reddawn.net/~jsprenkl/Sqlite Cthulhu Bucks! http://www.cthulhubucks.com
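Jay's lock-then-copy recipe can be sketched like this. A sketch using Python's bundled sqlite3 module and shutil in place of his C++ replicator; file names and row counts are illustrative:

```python
import os
import shutil
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "main.db")      # database being backed up
dst = os.path.join(tmp, "backup.db")    # the copy we want to make

con = sqlite3.connect(src, isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t(x)")
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
con.execute("COMMIT")

# BEGIN IMMEDIATE takes a reserved lock so no other connection can
# write the file while we copy it with ordinary file operations.
con.execute("BEGIN IMMEDIATE")
shutil.copyfile(src, dst)
con.execute("COMMIT")
con.close()

# The copy is a consistent snapshot and opens as a normal database.
backup = sqlite3.connect(dst)
rows, = backup.execute("SELECT count(*) FROM t").fetchone()
backup.close()
```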
Re: [sqlite] Delete performance vs. Insert performance
Inserts make SQLite write large amounts of data to disk; deletes make it (quickly) mark affected pages as unused. On 6/21/06, Insun Kang [EMAIL PROTECTED] wrote: Hi. I tested big deletes performance and big insert performance on a Windows CE device in various cache size configurations. ( 1MB, 100KB, 50KB ) Insert 3000 records performs within 23sec, 43sec and 61sec, with respect to each cache size configuration. However, delete 1000 records among 3000 records performs within about 0.4secs for all cache size configurations. Why does delete operation outperform insert operation? and how come the delete operation is independent of cache size? I think the updating indices costs are almost same in both insert and delete operations. Can anybody explain the reason? -- Insun Kang -- Cory Nelson http://www.int64.org
Re: [sqlite] Database locked. Any idea ?
I could not port my code quickly to Cygwin, but a quick investigation shows me that the lock (wrFlag) is never set back to 1. Which API is supposed to do this? Right now, I have the following stack trace:

sqlite3_step
  sqlite3VdbeExec
    OP_OpenRead
      sqlite3BtreeCursor
        wrFlag = 0;

Any idea how my table should be unlocked? Mario Hebert Legerity [EMAIL PROTECTED] 06/20/2006 03:07 PM Please respond to sqlite-users@sqlite.org To sqlite-users@sqlite.org cc Subject Re: [sqlite] Database locked. Any idea ? [EMAIL PROTECTED] wrote: Anyone has an idea of what may be wrong? Oh yeah, I am running my own port using uCos and a memory database. You say you are using a :memory: database and this is happening? Can you provide a test program running under Linux? -- D. Richard Hipp [EMAIL PROTECTED]
Re: [sqlite] Delete performance vs. Insert performance
Might be obvious but make sure you do all your inserts and deletes within a single transaction as I believe this has a big impact on performance. Might bring the insert and delete times closer. -- View this message in context: http://www.nabble.com/Delete-performance-vs.-Insert-performance-t1823679.html#a4976020 Sent from the SQLite forum at Nabble.com.
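The single-transaction advice can be illustrated as follows. This is my own sketch using Python's bundled sqlite3 module; the row counts mirror the original post, timings omitted:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t(a INTEGER, b TEXT)")
con.execute("CREATE INDEX t_a ON t(a)")

# All 3000 inserts inside one transaction: the journal is synced
# once at COMMIT instead of once per row.
con.execute("BEGIN")
for i in range(3000):
    con.execute("INSERT INTO t VALUES (?, 'test')", (i,))
con.execute("COMMIT")
inserted, = con.execute("SELECT count(*) FROM t").fetchone()

# Deleting a third of the rows is likewise one transaction.
con.execute("BEGIN")
con.execute("DELETE FROM t WHERE a < 1000")
con.execute("COMMIT")
remaining, = con.execute("SELECT count(*) FROM t").fetchone()
con.close()
```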
Re: [sqlite] How to port SQLite to a uc/os-II OS with customized file system?
Christian Smith wrote: Because my project will run in an embedded environment, I have to take care of RAM consumption. I have gone through the mailing list but have not found a description of minimum RAM usage. Could anyone tell me how much RAM is needed to run SQLite in an embedded environment? That depends. SQLite can have functionality conditionally compiled out, reducing its size. The front page (http://www.sqlite.org/) proclaims: Small code footprint: less than 250KiB fully configured or less than 150KiB with optional features omitted. YMMV. Your best bet is to choose the features you don't need, compile up your target library, and measure the code size yourself. The original post asked about RAM requirements in an embedded environment. In embedded applications the code is normally stored in, and executed from, ROM or FLASH, not RAM. The RAM requirements he is looking for are the stack space, heap space, and static variable storage requirements of SQLite when it is executing. I haven't seen any details on these requirements anywhere in the documentation, but I could have missed it. The stack and heap space requirements will depend upon the complexity of the queries you are executing. I would guess that the largest stack requirements would occur while compiling your queries, and the maximum heap requirements would occur while executing them (for things like temporary tables used by IN clauses etc.). The static variable requirements should be fixed when your application is linked, and may depend upon which features you have compiled in or omitted. I think you will probably have to determine these values experimentally. HTH Dennis Cote
Re: [sqlite] Delete performance vs. Insert performance
Insun Kang uttered: Hi. I tested big-delete and big-insert performance on a Windows CE device in various cache size configurations (1MB, 100KB, 50KB). Inserting 3000 records completes in 23s, 43s and 61s respectively for each cache size configuration. However, deleting 1000 of the 3000 records completes in about 0.4s for all cache size configurations. Why does the delete operation outperform the insert operation? And how come the delete operation is independent of cache size? I think the index-update costs are almost the same for insert and delete operations. Can anybody explain the reason? Delete without constraints is implemented using a DROP of the table followed by recreation of the table. Thus, all pages used by the table are simply marked as unused and added to the freelist. Adding to the freelist will touch each page at most once, and thus caching adds no benefit (and a smaller cache costs nothing). Inserting may touch each page multiple times, for such operations as rebalancing the tree. Therefore, a larger cache will be beneficial on inserts. Christian
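Christian's freelist explanation is observable through PRAGMA freelist_count: a delete does not shrink the file, it only moves pages onto the freelist. A sketch with Python's bundled sqlite3 module; the table and sizes are arbitrary:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "freelist.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t(a INTEGER, b TEXT)")
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?, ?)",
                ((i, "x" * 100) for i in range(5000)))
con.execute("COMMIT")

before, = con.execute("PRAGMA freelist_count").fetchone()  # no pages freed yet
con.execute("DELETE FROM t")  # clears the table in one cheap operation
after, = con.execute("PRAGMA freelist_count").fetchone()   # pages now on freelist
con.close()
```

The file itself keeps its size; only VACUUM (or auto_vacuum) actually returns the freed pages to the filesystem.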
Re: [sqlite] Opening the database file for read on Windows XP
Thanks for your reply. I know that I should lock the file before copying it, and the BEGIN IMMEDIATE is indeed a nice trick. However, I think I didn't explain my problem clearly. I would like to copy that file _without_ using the sqlite library (so using the Windows API only). When I try to do that with: CreateFile(db_file, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); I get error 0x20 - the process cannot access the file because it is being used by another process. I have two processes - one is linked with sqlite, and the other (which does the copying) is not. I can lock using the first process, but I need to make the copy with the other, and without linking to sqlite (although sqlite is small, I find it a pity to link to it _only_ in order to do such a copy). Thanks again, Ran On 6/21/06, Jay Sprenkle [EMAIL PROTECTED] wrote: On 6/21/06, Ran [EMAIL PROTECTED] wrote: I have an application that uses the sqlite3 API and opens the database file. While the file is opened (for reading) by sqlite3, I would like to copy the database file (so as to have a copy of the file). I guess I need to place a shared lock on the file (like sqlite3 does when reading from the file). I wonder what is the recommended way of doing that. Ran, I do this to replicate the database for backup. Acquire an immediate lock ( begin immediate ) then you can copy the file. Source code in C++ is downloadable from my sqlite support page (see the replicator). -- SqliteImporter and SqliteReplicator: Command line utilities for Sqlite http://www.reddawn.net/~jsprenkl/Sqlite Cthulhu Bucks! http://www.cthulhubucks.com
[sqlite] Re: Opening the database file for read on Windows XP
Ran [EMAIL PROTECTED] wrote: Thanks for your reply. I know that I should lock the file before copying it, and the BEGIN IMMEDIATE is indeed a nice trick. However, I think I didn't explain my problem clearly. I would like to copy that file _without_ using the sqlite library (so using the Windows API only). When I try to do that with: CreateFile(db_file, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); I get error 0x20 - the process cannot access the file because it is being used by another process. You want to enable sharing. Pass FILE_SHARE_READ | FILE_SHARE_WRITE as the third parameter. Igor Tandetnik
Re: [sqlite] Opening the database file for read on Windows XP
On 6/21/06, Ran [EMAIL PROTECTED] wrote: Thanks for your reply. I know that I should lock the file before copying it, and the BEGIN IMMEDIATE is indeed a nice trick. However, I think I didn't explain my problem clearly. I would like to copy that file _without_ using the sqlite library (so using the Windows API only). I do the same with my code. I do a file copy once the lock is established. When I try to do that with: CreateFile(db_file, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); I get error 0x20 - the process cannot access the file because it is being used by another process. If you don't have an exclusive lock in Sqlite you should be able to get a read-only copy through the file system. That's how I do it. I have two processes - one is linked with sqlite, and the other (which does the copying) is not. I can lock using the first process, but I need to make the copy with the other, and without linking to sqlite (although sqlite is small, I find it a pity to link to it _only_ in order to do such a copy). You'll need to have some communication between your processes so one knows that the other has locked the file and the copy can proceed. I wrote my replication program to be run from cron. It waits for a time trying to establish the correct lock; you might try the 'delay and retry' method.
Re: [sqlite] Delete performance vs. Insert performance
On 6/21/06, Christian Smith [EMAIL PROTECTED] Adding to the freelist will touch each page at most once, and thus caching adds no benefit (and has no loss for a smaller cache.) Inserting may touch each page multiple times, for such operations as rebalancing the tree. Therefore, a larger cache will be beneficial on inserts. Does delete rebalance the trees, or does it leave that until it's needed by an insert?
Re: [sqlite] Re: Opening the database file for read on Windows XP
Igor Tandetnik uttered: Ran [EMAIL PROTECTED] wrote: Thanks for your reply. I know that I should lock the file before copying it, and the BEGIN IMMEDIATE is indeed a nice trick. However, I think I didn't explain my problem clearly. I would like to copy that file _without_ using the sqlite library (so using the Windows API only). When I try to do that with: CreateFile(db_file, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); I get error 0x20 - the process cannot access the file because it is being used by another process. You want to enable sharing. Pass FILE_SHARE_READ | FILE_SHARE_WRITE as the third parameter. Surely not FILE_SHARE_WRITE! You don't want other processes writing the database while you're copying it. Igor Tandetnik
Re: [sqlite] Re: Opening the database file for read on Windows XP
Ha! That did the trick. I tried only with FILE_SHARE_READ and this didn't work, but I didn't try with both of them. Thanks a lot! Ran On 6/21/06, Igor Tandetnik [EMAIL PROTECTED] wrote: Ran [EMAIL PROTECTED] wrote: Thanks for your reply. I know that I should lock the file before copying it, and the BEGIN IMMEDIATE is indeed a nice trick. However, I think I didn't explain my problem clearly. I would like to copy that file _without_ using the sqlite library (so using the Windows API only). When I try to do that with: CreateFile(db_file, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); I get error 0x20 - the process cannot access the file because it is being used by another process. You want to enable sharing. Pass FILE_SHARE_READ | FILE_SHARE_WRITE as the third parameter. Igor Tandetnik
Re: [sqlite] Opening the database file for read on Windows XP
You'll need to have some communication between your processes so one knows that the other has locked the file and the copy can proceed. I wrote my replication program to be run from cron. It waits for a time trying to establish the correct lock; you might try the 'delay and retry' method. True. The two processes are a COM server and client, so they do speak with each other, and indeed one will lock using BEGIN IMMEDIATE, the other will read, and then the first will COMMIT. Thanks, Ran
Re: [sqlite] Delete performance vs. Insert performance
Insun Kang wrote: Hi. I tested big deletes performance and big insert performance on a Windows CE device in various cache size configurations. ( 1MB, 100KB, 50KB ) Insert 3000 records performs within 23sec, 43sec and 61sec, with respect to each cache size configuration. However, delete 1000 records among 3000 records performs within about 0.4secs for all cache size configurations. Why does delete operation outperform insert operation? and how come the delete operation is independent of cache size? I think the updating indices costs are almost same in both insert and delete operations. Can anybody explain the reason? Insun, I think that you must be doing your inserts as 3000 separate transactions. Try wrapping the inserts with a pair of begin transaction/commit transaction commands. I just wrote a test script to try doing something similar to what you described using the sqlite3 shell. I get times of 8 seconds for 64K + 128K inserts with one preexisting index, and 4 seconds to delete the first 64K records from that table. 
The script is:

create table t1(a integer, b text);
create table t2(a integer, b text);
create table t3(a integer, b text);
create table times(
  id integer primary key,
  t timestamp default current_time
);

-- create 64K records in t1
begin;
insert into t1 values( random(), 'test');
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;

-- create 128K records in t2
insert into t2 select random(), b from t1;
insert into t2 select random(), b from t1;
commit;

-- index the random numbers
create index t3_a on t3(a);

-- insert all records into t3 with an index in one transaction
insert into times(id) values(1);
insert into t3 select * from t1 union select * from t2;
insert into times(id) values(2);

-- delete the t1 records (one third) from t3
insert into times(id) values(3);
delete from t3 where a in (select a from t1);
insert into times(id) values(4);

-- display times
select 'inserts: ' || round(86400 * (
  julianday((select t from times where id = 2)) -
  julianday((select t from times where id = 1)) ), 0) || ' seconds';
select 'deletes: ' || round(86400 * (
  julianday((select t from times where id = 4)) -
  julianday((select t from times where id = 3)) ), 0) || ' seconds';

This script produces the following results.
C:\Documents and Settings\DennisC> sqlite3 test.db3 < test.sql
inserts: 8.0 seconds
deletes: 4.0 seconds

Note that my inserts into t3 all happen in one transaction. These rows all contain a random integer value and are indexed on that value. I'm inserting around 24K records per second, and deleting around 16K records per second. This is compared to your rates of about 49 inserts per second (with a 50KB cache size), and 2.5K deletes per second. I suspect that with a transaction around your inserts you will also see an insert rate around 3K records per second. HTH Dennis Cote
Re: [sqlite] How to port SQLite to a uc/os-II OS with customized file system?
At 10:19 21/06/2006, you wrote: Hi all, I'm trying to build a database engine based on the uc/os-II RTOS with my own customized file system (similar to FAT16, but not exactly the same). I find that SQLite is a good choice. SQLite is the best choice; we have it running in a PPC440GX embedded system without problems. About the filesystem, I don't know; we use a RAM disk driver (2GB) with a minimal filesystem, and make copies to an external hard disk via ethernet at fixed times. We also disabled the SQLite cache. RAM consumes less and takes less space in our device than hard disks. Another question: because my project will run in an embedded environment, I have to take care of RAM consumption. I have gone through the mailing list but have not found a description of minimum RAM usage. Could anyone tell me how much RAM is needed to run SQLite in an embedded environment? SQLite + OS code can be fitted in a 512 SRAM module (copied from flash at startup), and data in DRAM modules. There is no minimum RAM because you can force SQLite to use the hard disk as temp storage (http://www.sqlite.org/pragma.html), except for the cache, which will take your page_size (1KB default) + 0.5KB for each page cached. I suppose that master tables are RAM-based also, I don't know, but their size is minimal. HTH
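The knobs mentioned above - the page-cache size and disk-based temp storage - are plain pragmas. A sketch with Python's bundled sqlite3 module; the values are illustrative, not recommendations:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Limit the page cache to 10 pages and push temporary tables/indices
# to a file instead of RAM (temp_store: 0 = default, 1 = FILE, 2 = MEMORY).
con.execute("PRAGMA cache_size=10")
con.execute("PRAGMA temp_store=1")

# Read the settings back to confirm they took effect.
cache, = con.execute("PRAGMA cache_size").fetchone()
temp, = con.execute("PRAGMA temp_store").fetchone()
con.close()
```

In the embedded C build the same pragmas apply; the trade-off is RAM against extra I/O to whatever backing store the port provides.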
[sqlite] Re: Re: Opening the database file for read on Windows XP
Christian Smith [EMAIL PROTECTED] wrote: Igor Tandetnik uttered: You want to enable sharing. Pass FILE_SHARE_READ | FILE_SHARE_WRITE as the third parameter. Surely not FILE_SHARE_WRITE! You don't want other processes writing the database while you're copying it. The file is already opened by another process for read/write; you must specify FILE_SHARE_WRITE, otherwise you won't be able to open it. You have to impose a locking mechanism separate from that provided by the OS - hence the BEGIN IMMEDIATE command, which guarantees that no writes will occur via SQLite. Igor Tandetnik
Re: [sqlite] Functions embedded in SQL statements
The new SCM I (and others) are working on will allow you to quickly and easily download the entire source code/wiki/ticket repository and/or synchronize your local repository with remote changes. So ultimately this will not be an issue. But all that is still in the future. Is this available publicly anywhere? Another suggestion is you may want to look at MediaWiki. It has an extension mechanism that lets you provide handlers for anything between tags of your choice. Eg you could have the following in the source page: <bug action=view id=347></bug> In your handler you can generate raw HTML, or you can generate wikitext markup. We are planning on moving all of our docs into MediaWiki for the BitPim project and then generating help docs from that. I'll even be able to make it in CHM for Windows, Apple Help for Mac and plain HTML for Linux/Unix. Roger
Re: [sqlite] Functions embedded in SQL statements
D.Richard Hipp [EMAIL PROTECTED] wrote: Perhaps it would be sufficient to take snapshots of the wiki and ship that with each release? Yes, shipping wiki snapshots with each build should be fine. Even better: a versioned wiki - so users of legacy versions can edit and improve documentation for their version. This can be especially useful if new versions pose problems to older releases, for example the PRAGMA legacy_file_format issue. Downloadable versions of documentation would of course be nice, too. Ralf
Re: [sqlite] SQLiteSpy 1.5.4 released
Hello C.Peachment, It appears that VACUUM is not the only SQL command to cause this behaviour. I had done a number of INSERT and UPDATE commands before closing the database and attempting to use it with PHP. The VACUUM command was not used. I am surprised to read this. I would be interested if you could reproduce this so I can see whether SQLiteSpy is the culprit or whether it is an issue in SQLite. If you can, please contact me via e-mail. For the moment, my situation is not unique since other users of PHP will face it too. The pragma legacy_file_format=on instruction should solve the problem. Any possibility that it could be set as a configuration parameter for SQLiteSpy to avoid the need to issue the pragma command each time SQLiteSpy is used? Sure, I will think about that. I have searched the documentation for this pragma but without success. Is this a case of Star Wars: Use the source, Luke? :-) Quite true. This has come up on the mailing list already, but maybe an SQLite documentation bug report would push it to success? Regards, Ralf
Re: [sqlite] Functions embedded in SQL statements
Roger Binns [EMAIL PROTECTED] wrote: The new SCM I (and others) are working on will allow you to quickly and easily download the entire source code/wiki/ticket repository and/or synchronize your local repository with remote changes. So ultimately this will not be an issue. But all that is still in the future. Is this available publicly anywhere? No code. Just some notes. http://fossil-scm.hwaci.com/ Another suggestion is you may want to look at MediaWiki. It has an extension mechanism that lets you provide handlers for anything between tags of your choice. Eg you could have the following in the source page: <bug action=view id=347></bug> In your handler you can generate raw HTML, or you can generate wikitext markup. We are planning on moving all of our docs into MediaWiki for the BitPim project and then generating help docs from that. I'll even be able to make it in CHM for Windows, Apple Help for Mac and plain HTML for Linux/Unix. I want the user-interaction flexibility of MediaWiki, but I want radically simpler setup and administration (no webserver required, zero-configuration) and I also want to support software versioning and bug reports within the same system. I'm aiming for all the best features of MediaWiki, Trac/CVSTrac, and monotone, in a small zero-configuration package that is ridiculously simple to use. -- D. Richard Hipp [EMAIL PROTECTED]
Re: [sqlite] :memory: DB releasing storage
Any thoughts on this problem? I've been running with this patch and it seems to deal with the memory leak, but no auto-vacuum. :( Thanks, Rick Keiner

On 6/9/06, Rick Keiner [EMAIL PROTECTED] wrote: There seems to be a bug in the memoryTruncate function in the pager. When it iterates through the pages, I saw that there were page numbers of 0 where no action was being taken. As the number of deletes increased, the number of page-number-0 entries increased. By making the following modification I no longer saw the memory leak:

  if( pPg->pgno<=dbSize && pPg->pgno!=0 ){

Everything seemed to be fine, but I really don't understand enough about the pager to know what impact this may have. I'm only trying to observe what's going on. The auto_vacuum still didn't return storage to the system, though. Is a page number of 0 valid? hth, Rick Keiner

On 6/7/06, Rick Keiner [EMAIL PROTECTED] wrote: Understood. It seems the pager code is more relevant. However, I am using the pragma. It works fine for a disk database: when the deletes are performed the database file returns to its original size, and I don't see any memory increase (just in case it was my code). The identical code executed against a memory database shows the memory continuing to increase. After the deletes there is no decrease in storage, then the inserts are performed again and my storage usage increases. Delete and insert again, and the storage continues to climb at the identical rate. If I double the number of inserts, the storage increase doubles. This is what I am seeing. The number on the left is storage.
Memory DB - series of inserts and deletes. The storage increases with each insert:

  57.2M
  Insert 4000 -> 69.9M
  Flush       -> 69.9M
  Insert 4000 -> 72.4M
  Flush       -> 72.4M
  Insert 4000 -> 75.7M
  Flush       -> 75.6M
  Insert 4000 -> 78.3M
  Flush       -> 78.4M
  Insert 4000 -> 80.9M

Double the records:

  57.1M
  Insert 8000 -> 73.6M
  Flush       -> 73.7M
  Insert 8000 -> 78.8M
  Flush       -> 78.9M
  Insert 8000 -> 84.1M
  Flush       -> 84.1M
  Insert 8000 -> 89.4M
  Flush       -> 89.5M

Disk database (storage on the left, file size on the right):

  57.2M                  File size - 8K
  Insert 4000 -> 67.3M   File size - 1.9M
  Flush       -> 67.9M   File size - 9K
  Insert 4000 -> 67.9M   File size - 1.9M
  Flush       -> 67.9M   File size - 9K
  Insert 4000 -> 67.9M   File size - 1.9M
  Flush       ->         File size - 9K

Is there a minimum amount of storage that it will use until it starts to release storage? Thanks, Rick Keiner

On 6/7/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Rick Keiner [EMAIL PROTECTED] wrote: Yes, apparently. The :memory: DB doesn't show the storage being reclaimed by the OS. In fact, after some more analysis, it's not reusing storage already allocated. :( Could that be? After checking the vacuum.c code, it's not doing anything for an in-memory DB. Would that be handled elsewhere?

  /* Get the full pathname of the database file and create a
  ** temporary filename in the same directory as the original file.
  */
  pMain = db->aDb[0].pBt;
  zFilename = sqlite3BtreeGetFilename(pMain);
  assert( zFilename );
  if( zFilename[0]=='\0' ){
    /* The in-memory database. Do nothing. Return directly to avoid causing
    ** an error trying to DETACH the vacuum_db (which never got attached)
    ** in the exit-handler. */
    return SQLITE_OK;
  }

Auto-vacuum and VACUUM, in spite of similar names, are very different mechanisms. You enable auto-vacuum by issuing a pragma:

  PRAGMA auto_vacuum=ON;

prior to creating any tables in your :memory: database. -- D. Richard Hipp [EMAIL PROTECTED]
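The ordering constraint in that last reply is easy to trip over: the auto_vacuum setting only takes hold if issued before the first table is created. A minimal sketch using Python's sqlite3 module (hypothetical file name; modern SQLite spells the value FULL or 1, where 2006-era builds accepted ON):

```python
import os
import sqlite3
import tempfile

# Sketch: enable auto-vacuum BEFORE any CREATE TABLE, then confirm the
# setting stuck. Once a table exists, the choice is baked into the file.
path = os.path.join(tempfile.mkdtemp(), "av.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA auto_vacuum=FULL")              # 1 = full auto-vacuum
conn.execute("CREATE TABLE t(x BLOB)")
conn.execute("INSERT INTO t VALUES (zeroblob(100000))")
conn.execute("DELETE FROM t")                        # freed pages shrink the file
print(conn.execute("PRAGMA auto_vacuum").fetchone()[0])  # prints 1
```

Had the pragma come after the CREATE TABLE, the query would still report 0 and deleted pages would simply stay on the freelist.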
[sqlite] [PATCH] Let the trigger(s) re*CURSE*. Up to certain depth.
Hello list, I've had trouble finding the developer list, so I'm posting here. Ever wanted recursive triggers in sqlite? This is a somewhat brute wanna-be hack allowing sqlite triggers to recurse up to a certain depth. I'd guess that trigger deletion might be broken, as well as other things that depend on triggers being non-recursive. Also it's utterly slow, even for precompiled statements... I have no idea why. /me calling for enlightenment on this subject.

Patch against current CVS: http://hysteria.cz/sd/sqlite-trigrecurse.patch

Some people might find this extremely useful - ./contrib? Have fun!
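For readers finding this thread later: recursive triggers eventually landed in stock SQLite (version 3.6.18) behind PRAGMA recursive_triggers, with depth capped by the SQLITE_MAX_TRIGGER_DEPTH compile-time limit (1000 by default). A small demonstration via Python's sqlite3 module, with a hypothetical counting trigger:

```python
import sqlite3

# Sketch: modern SQLite ships the feature this patch was after. With
# recursive_triggers on, an AFTER INSERT trigger may fire itself until
# its WHEN clause stops it (or the trigger-depth limit is hit).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA recursive_triggers=ON")
conn.execute("CREATE TABLE t(n INTEGER)")
conn.execute("""
    CREATE TRIGGER grow AFTER INSERT ON t
    WHEN new.n < 3
    BEGIN
        INSERT INTO t VALUES (new.n + 1);  -- fires itself until n reaches 3
    END
""")
conn.execute("INSERT INTO t VALUES (0)")
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # prints 4 (rows 0,1,2,3)
```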
Re: [sqlite] compiling sqlite
Tried to compile 2.8.16 but got the errors below. Any suggestions? Thanks.

C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\btree_rb.c(314) : warning C4013: 'printf' undefined; assuming extern returning int
C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\vdbe.c(389) : warning C4013: 'getc' undefined; assuming extern returning int
C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\vdbe.c(395) : warning C4013: 'ungetc' undefined; assuming extern returning int
C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\vdbe.c(4140) : warning C4013: 'fopen' undefined; assuming extern returning int
C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\vdbe.c(4140) : warning C4047: '=' : 'void *' differs in levels of indirection from 'int '
Linking...
vdbe.obj : error LNK2001: unresolved external symbol _fopen
vdbe.obj : error LNK2001: unresolved external symbol _ungetc
vdbe.obj : error LNK2001: unresolved external symbol _getc
X86Rel/sqlite.dll : fatal error LNK1120: 3 unresolved externals

On Mon, 2006-06-19 at 10:22 +0100, Nuno Lucas wrote: On 6/19/06, Robin Cook [EMAIL PROTECTED] wrote: Is it possible to compile sqlite on embedded visual c 3.0 for wince 2.11 without MFC and TCL/TK on a Symbol PDT7242 barcode scanner?

You should be able to use the 2.8.16 source on the sqlite-wince.sf.net site (maybe with one or other tweak, as it's been a while since I last tested it on 2.11). You will not have file locking support, but I guess you will not need it. As for Tcl/Tk, you are on your own, sorry.

I have tried with the already preprocessed Windows source but keep getting errors about missing header files like assert.h etc.

The port includes dummy assert.h and time.h files exactly because of this.

I am unable to use the DLL as it requires msvcrt, which is not available on it.

You can't use a Windows DLL on CE, even if msvcrt were available. You will need to compile your own.

Best regards, ~Nuno Lucas

Thanks, Robin Cook

signature.asc Description: This is a digitally signed message part
Re: [sqlite] Delete performance vs. Insert performance
Hi guys. Thank you all for your replies. I think I need to give you more details about my test bed.

The machine is an iPAQ h5550 (CPU speed is about 400MHz).
Cache size = 500 pages * 2KB = 1MB
Cache size = 50 pages * 2KB = 100KB
Cache size = 25 pages * 2KB = 50KB

The test code is written in C and the flow is like this.

- The data table has 11 columns, 5 single-column indices and 5 multi-column indices.
- Insert 3000 recs within a single transaction (begin / insert 3000 recs / commit):
  INSERT INTO MDS VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11)
  * I use the sqlite3_prepare() and sqlite3_bind_xxx() functions. The data bind time can be ignored because all data to be bound are already loaded in memory before the transaction begins.
- Delete 1000 recs among the 3000 recs within a single transaction (begin / delete 1000 recs / commit):
  DELETE FROM T WHERE eleven LIKE :1
  * I do not think sqlite3 uses a drop & rebuild scheme for this SQL statement. One possible scheme that I can guess is that sqlite3 removes recs only from the data table, not from all indices (lazy update for indices). But I am not certain. Any ideas?

Thank you in advance.

--- create table & index ---
create table T (
  one    text NOT NULL,
  two    text,
  three  int,
  four   text,
  five   int,
  six    int NOT NULL,
  seven  int,
  eight  int,
  nine   int,
  ten    int UNIQUE,
  eleven text
)
create index i1 ~ i5 (single-column indices)
create index mi1 ~ mi5 (multi-column indices: consisting of 2 or 3 columns)

On 6/22/06, Dennis Cote [EMAIL PROTECTED] wrote: Insun Kang wrote: Hi. I tested big-delete performance and big-insert performance on a Windows CE device in various cache size configurations (1MB, 100KB, 50KB). Inserting 3000 records takes 23 sec, 43 sec and 61 sec respectively for the three cache sizes. However, deleting 1000 records among the 3000 records takes about 0.4 sec for all cache size configurations. Why does the delete operation outperform the insert operation? And how come the delete operation is independent of cache size?
I think the index-updating costs are almost the same for both insert and delete operations. Can anybody explain the reason?

Insun,

I think that you must be doing your inserts as 3000 separate transactions. Try wrapping the inserts with a pair of begin transaction / commit transaction commands.

I just wrote a test script to try doing something similar to what you described using the sqlite3 shell. I get times of 8 seconds for 64K + 128K inserts with one preexisting index, and 4 seconds to delete the first 64K records from that table. The script is:

create table t1(a integer, b text);
create table t2(a integer, b text);
create table t3(a integer, b text);
create table times(
  id integer primary key,
  t timestamp default current_time
);

-- create 64K records in t1
begin;
insert into t1 values( random(), 'test');
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;
insert into t1 select random(), b from t1;

-- create 128K records in t2
insert into t2 select random(), b from t1;
insert into t2 select random(), b from t1;
commit;

-- index the random numbers
create index t3_a on t3(a);

-- insert all records into t3 with an index in one transaction
insert into times(id) values(1);
insert into t3 select * from t1 union select * from t2;
insert into times(id) values(2);

-- delete one third of records from t1
insert into times(id) values(3);
delete from t3 where a in (select a from t1);
insert into times(id) values(4);

-- display times
select 'inserts: ' || round(86400 * (julianday((select t from times where id = 2)) - julianday((select t from times where id = 1))), 0) || ' seconds';
select 'deletes: ' || round(86400 * (julianday((select t from times where id = 4)) - julianday((select t from times where id = 3))), 0) || ' seconds';

This script produces the following results:

C:\Documents and Settings\DennisC> sqlite3 test.db3 < test.sql
inserts: 8.0 seconds
deletes: 4.0 seconds

Note that my inserts into t3 all happen in
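Dennis's diagnosis (one implicit transaction per insert versus one transaction around the whole batch) can be sketched with Python's sqlite3 module. Table and column names below are hypothetical, loosely modeled on Insun's schema; the point is that a single BEGIN...COMMIT around all 3000 prepared inserts pays the journal/sync cost once instead of 3000 times.

```python
import os
import sqlite3
import tempfile

# Sketch (hypothetical table/column names): batch 3000 parameterized
# inserts into ONE transaction, then run the LIKE-based delete from the
# original post, also as a single transaction.
path = os.path.join(tempfile.mkdtemp(), "mds.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE mds(one TEXT NOT NULL, six INTEGER NOT NULL, eleven TEXT)")
rows = [("r%d" % i, i, "tag%d" % (i % 3)) for i in range(3000)]

with conn:  # emits BEGIN ... COMMIT around the whole batch
    conn.executemany("INSERT INTO mds VALUES (?,?,?)", rows)

with conn:  # delete 1000 of the 3000 rows in one transaction
    conn.execute("DELETE FROM mds WHERE eleven LIKE ?", ("tag0",))

print(conn.execute("SELECT count(*) FROM mds").fetchone()[0])  # prints 2000
```

Running the insert loop with a commit() after every row instead would reproduce the slow, cache-size-sensitive behaviour described in the thread.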