Re: [sqlite] Memory Usage
Most probably it is a memory leak in your program. You must release dynamically allocated memory yourself, so check whether you are forgetting to do that. Such leaks usually happen inside loops or in functions that are called repeatedly.

On Fri, 2006-10-27 at 17:00 +0100, Ben Clewett wrote:
> Dear Sqlite,
>
> I very much enjoy using Sqlite, it is extremely useful.
>
> I have a memory usage query.
>
> I am linking to libsqlite3.so.0.8.6. After calling sqlite3_open(...) I
> find my program's data memory jumps by 16392 Kb.
>
> This seems a lot. The database I am opening is only 26K in size.
>
> I have a similar process opening about 90 times. This obviously
> consumes a very large amount of memory, 1.4G with 90 processes.
>
> May I ask if this is what would be expected, and whether there is
> anything I can do to lower this loading?
>
> Thanks for your help,
>
> Ben.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-
Re: [sqlite] Memory Usage
At 18:00 27/10/2006, you wrote:
> Dear Sqlite,
>
> I very much enjoy using Sqlite, it is extremely useful.
>
> I have a memory usage query.
>
> I am linking to libsqlite3.so.0.8.6. After calling sqlite3_open(...) I
> find my program's data memory jumps by 16392 Kb.
>
> This seems a lot. The database I am opening is only 26K in size.
>
> I have a similar process opening about 90 times. This obviously
> consumes a very large amount of memory, 1.4G with 90 processes.
>
> May I ask if this is what would be expected, and whether there is
> anything I can do to lower this loading?
>
> Thanks for your help,
>
> Ben.

Perhaps the SQLite cache is taking that memory. I don't know if the cache is pre-allocated or whether it takes memory as it is needed. Also, if your temp store is in memory, any temporary table or similar takes more memory.

HTH

---
God is Real, but it can be declared as integer also...
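The per-connection cache suggested above as a possible culprit can be bounded with PRAGMA cache_size, which limits how many pages a connection will cache (the cache itself grows lazily as pages are read). A minimal sketch, using Python's built-in sqlite3 module purely for illustration; the thread's context is C, but the pragma behaves the same from any binding:

```python
import sqlite3

# Open a connection; the page cache fills lazily as pages are read,
# up to the limit set by cache_size (measured in pages).
conn = sqlite3.connect(":memory:")

# Shrink the per-connection cache limit, trading speed for a smaller
# memory footprint per process.
conn.execute("PRAGMA cache_size = 100")

# Read the limit back to confirm it took effect.
limit = conn.execute("PRAGMA cache_size").fetchone()[0]
print(limit)  # 100
conn.close()
```

With 90 processes each holding a connection, lowering this limit in each process is the most direct lever over aggregate cache memory.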
Re: [sqlite] Re: Regarding sqlite3_exec
Why don't you design the table with a unique row ID, stored in an integer field, then fetch a list of those ID numbers? For 5000 rows, assuming you store them in your application as 4-byte longs, that's about 19 K of memory. Counting that result as you receive it isn't that difficult. If it takes a long time (it probably won't) you can do it in another thread and update the interface as appropriate. I'm not seeing a downside here.

Isaac

On 10/26/06, Da Martian <[EMAIL PROTECTED]> wrote:
> No there isn't, but RDBM systems are a generalised data retrieval
> mechanism, and as such they suffer from that generality. Don't get me
> wrong, RDBM systems are appropriate for 95% of all data requirements I
> have had to deal with, and I would never dream of trying to write one
> from scratch, nor can I imagine a world without them. However, certain
> applications (weather data, genome data, large indices like Google's)
> require something designed specifically for that purpose. If you
> customise data retrieval (and in particular your sorting/indices/access
> paths) you can leave RDBMSs in the dust in terms of performance. All I
> have read about Google suggests they do exactly this, although I must
> point out I don't actually know anything about Google with any
> certainty, just what has "leaked" out over the years on the rumour
> mill. But in designing my own "Google"-like indices (on a smaller scale
> of course) and some specialised weather stuff, it was necessary to
> throw away the RDBMS and do it myself. For a Google query, for
> instance, they know they will get a list of one or more words. They
> also know they will only ever search through the index of words. They
> don't have other data types, records or tables. Why go through all the
> hassle of compiling SQL, and all that generic overhead, when your
> application will only ever do one thing? You can just make an API like
> this: "search(wordlist): Resultset". You immediately save yourself
> complexity and processing time. Then for large indices you will know
> your data set, so instead of using a standard B-tree you would use a
> more appropriate data structure, possibly with skip lists etc. As for
> performing a database search twice: as this whole thread has shown,
> sometimes you have to :-)
>
> S
>
> On 10/25/06, John Stanton <[EMAIL PROTECTED]> wrote:
> >
> > There is no magic in data retrieval. Google use the same physical laws
> > as us ordinary mortals.
> >
> > I see no reason to ever perform a database search twice.

--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.
http://blueapples.org - blog
http://stonenotes.com - personal knowledge management
Re: [sqlite] Memory Usage
On 10/27/06, Ben Clewett <[EMAIL PROTECTED]> wrote:
> I am linking to libsqlite3.so.0.8.6. After calling sqlite3_open(...) I
> find my program's data memory jumps by 16392 Kb.
>
> This seems a lot. The database I am opening is only 26K in size.

There are many possible causes for such a memory "jump" (like linking with a lot of dynamic libraries), but one thing is certain: sqlite is not responsible for that.

> I have a similar process opening about 90 times. This obviously
> consumes a very large amount of memory, 1.4G with 90 processes.

It's a memory leak in your program, for sure. Run some memory leak tool (e.g. valgrind).

> May I ask if this is what would be expected, and whether there is
> anything I can do to lower this loading?

Unless you decided to mess with sqlite internals, it's not expected in any way.

> Thanks for your help,
>
> Ben.

Best regards,
~Nuno Lucas
[sqlite] Memory Usage
Dear Sqlite,

I very much enjoy using Sqlite, it is extremely useful.

I have a memory usage query.

I am linking to libsqlite3.so.0.8.6. After calling sqlite3_open(...) I find my program's data memory jumps by 16392 Kb.

This seems a lot. The database I am opening is only 26K in size.

I have a similar process opening about 90 times. This obviously consumes a very large amount of memory, 1.4G with 90 processes.

May I ask if this is what would be expected, and whether there is anything I can do to lower this loading?

Thanks for your help,

Ben.
Re: [sqlite] serious performance problems with indexes
Hi DRH,

A mailing list post by you outlines a similar problem that I am seeing: http://www.mail-archive.com/sqlite-users@sqlite.org/msg15913.html

Have you given any thought to this index page locality matter? Perhaps something like:

PRAGMA reserve_pages_for_indexes = 5

whereby a chunk of file space can be exclusively reserved for index use, to prevent index fragmentation? I realize that this goes against the zero-admin principle of SQLite, but such a feature might lead to dramatic improvements in bulk insert and cold-cache queries.

thanks.

- Original Message
From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Friday, October 27, 2006 9:31:51 AM
Subject: Re: [sqlite] serious performance problems with indexes

Peter De Rijk <[EMAIL PROTECTED]> wrote:
> I have run into a serious performance problem with tables with many rows.
> The problem only occurs on tables with an index.
> The time needed for an insert into a table with an index is dependent on
> the number of rows. I have not formally checked, but from my tests it
> looks like an exponential dependence. This of course means that while
> sqlite is very fast on smaller datasets, it is slow on larger data sets
> and becomes unusable on large datasets (millions of rows). The insert
> behaviour is normal on non-indexed tables, but obviously queries are a
> problem then.
> Is this index behaviour normal/expected for sqlite, or can this be solved?

When a table is indexed, INSERT performance is logarithmic in the number of rows in the table and linear in the number of indices. This is because entries have to be inserted into the index in sorted order (otherwise it wouldn't be an index), and each insert thus requires a binary search. If your index becomes very large so that it no longer fits in your disk cache, then the binary search can involve a lot of disk I/O, which can make things really slow.

The usual work-around here is to keep a much smaller staging table into which you do your inserts, and then periodically merge the staging table into the main table. This makes your queries more complex (and slower), since you are now having to search multiple tables. But it does speed up inserts.
--
D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Benefits to use of "NOT NULL" where possible?
An SQLite NOT NULL issue related to primary keys that may be of interest: http://www.mail-archive.com/sqlite-users@sqlite.org/msg17947.html

- Original Message
From: Scott Hess <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Friday, October 27, 2006 3:00:46 AM
Subject: [sqlite] Benefits to use of "NOT NULL" where possible?

In some database systems, it can be beneficial to use "NOT NULL" as much as possible when defining tables. It usually allows for a slightly tighter storage encoding, and also allows some optimizations to occur.

AFAICT, in sqlite it only seems important for constraining the data appropriately. For a column which does not contain null data, it looks like the storage doesn't change between when it's defined with NOT NULL or without (I'm guessing this is basically because of manifest typing). Also, the EXPLAIN output doesn't change at all when I add or remove the NOT NULL. So it would seem that the actual performance would never change, nor would the storage footprint, so long as you don't ever attempt to insert null data (in which case the NOT NULL version would convert it to an appropriate default value, which might cause changes to storage or performance).

Anyone got holes to shoot in my reasoning?

Thanks,
scott

[It's sort of hard to search for emails to sqlite-users about "NOT NULL". There are many references to "NOT NULL" in table examples people include with their questions :-).]
Re: [sqlite] serious performance problems with indexes
Peter De Rijk <[EMAIL PROTECTED]> wrote:
> I have run into a serious performance problem with tables with many rows.
> The problem only occurs on tables with an index.
> The time needed for an insert into a table with an index is dependent on
> the number of rows. I have not formally checked, but from my tests it
> looks like an exponential dependence. This of course means that while
> sqlite is very fast on smaller datasets, it is slow on larger data sets
> and becomes unusable on large datasets (millions of rows). The insert
> behaviour is normal on non-indexed tables, but obviously queries are a
> problem then.
> Is this index behaviour normal/expected for sqlite, or can this be solved?

When a table is indexed, INSERT performance is logarithmic in the number of rows in the table and linear in the number of indices. This is because entries have to be inserted into the index in sorted order (otherwise it wouldn't be an index), and each insert thus requires a binary search. If your index becomes very large so that it no longer fits in your disk cache, then the binary search can involve a lot of disk I/O, which can make things really slow.

The usual work-around here is to keep a much smaller staging table into which you do your inserts, and then periodically merge the staging table into the main table. This makes your queries more complex (and slower), since you are now having to search multiple tables. But it does speed up inserts.
--
D. Richard Hipp <[EMAIL PROTECTED]>
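The staging-table work-around described above can be sketched as follows. This uses Python's built-in sqlite3 module for illustration (the thread's context is C, but the SQL is identical), and the table names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_t (k INTEGER, v TEXT)")
conn.execute("CREATE INDEX main_k ON main_t (k)")
# The staging table is deliberately unindexed: inserts into it never
# touch the large index, so they stay fast regardless of main_t's size.
conn.execute("CREATE TABLE staging (k INTEGER, v TEXT)")

# Fast path: bulk inserts go into the staging table.
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [(i, "row%d" % i) for i in range(1000)])

# Periodic merge: move staged rows into the indexed table in one
# transaction, paying the index-update cost in a single batch.
with conn:
    conn.execute("INSERT INTO main_t SELECT * FROM staging")
    conn.execute("DELETE FROM staging")

# The cost: until the next merge, queries must consult both tables.
total = conn.execute(
    "SELECT (SELECT COUNT(*) FROM main_t) + (SELECT COUNT(*) FROM staging)"
).fetchone()[0]
print(total)  # 1000
```

The two-table query is the "more complex (and slower)" part of the trade-off mentioned in the reply; a UNION ALL view over both tables can hide it from application code.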
Re: [sqlite] serious performance problems with indexes
Peter De Rijk wrote:
> I have run into a serious performance problem with tables with many rows.
> The problem only occurs on tables with an index.
> The time needed for an insert into a table with an index is dependent on
> the number of rows. I have not formally checked, but from my tests it
> looks like an exponential dependence. This of course means that while
> sqlite is very fast on smaller datasets, it is slow on larger data sets
> and becomes unusable on large datasets (millions of rows). The insert
> behaviour is normal on non-indexed tables, but obviously queries are a
> problem then.
> Is this index behaviour normal/expected for sqlite, or can this be solved?

Peter, unless I am mistaken, inserting into a table that has an index requires the index to be updated for each insert. Are you inserting multiple rows in a single transaction or not? If you could put them in a single transaction, I think the performance will remain adequate, even for very large tables.

Regards,

Arjen
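The single-transaction advice above makes the biggest practical difference for bulk inserts: in autocommit mode every INSERT is its own transaction with its own journal sync, while an explicit transaction pays that cost once. A sketch using Python's built-in sqlite3 module for illustration (names invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER)")
conn.execute("CREATE INDEX t_k ON t (k)")

# One explicit transaction around all the inserts: a single commit at
# the end, instead of one commit (and, on disk, one journal sync) per
# row as autocommit mode would do.
with conn:  # issues BEGIN ... COMMIT around the block
    conn.executemany("INSERT INTO t VALUES (?)",
                     ((i,) for i in range(10000)))

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10000
```

This does not change the logarithmic index-update cost per row that DRH describes elsewhere in the thread, but it removes the per-row transaction overhead, which usually dominates on disk-backed databases.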
[sqlite] serious performance problems with indexes
I have run into a serious performance problem with tables with many rows. The problem only occurs on tables with an index. The time needed for an insert into a table with an index is dependent on the number of rows. I have not formally checked, but from my tests it looks like an exponential dependence. This of course means that while sqlite is very fast on smaller datasets, it is slow on larger data sets and becomes unusable on large datasets (millions of rows). The insert behaviour is normal on non-indexed tables, but obviously queries are a problem then.

Is this index behaviour normal/expected for sqlite, or can this be solved?

--
Dr Peter De Rijk                     E-mail: [EMAIL PROTECTED]
Bioinformatics Unit                  Tel. +32 3 265 10 08
Department of Molecular Genetics VIB8  Fax. +32 3 265 10 12
University of Antwerp                http://www.molgen.ua.ac.be/
Universiteitsplein 1
B-2610 Antwerpen, Belgium

The glass is not half full, nor half empty. The glass is just too big.
Re: [sqlite] Benefits to use of "NOT NULL" where possible?
"Scott Hess" <[EMAIL PROTECTED]> wrote:
> In some database systems, it can be beneficial to use "NOT NULL" as
> much as possible when defining tables. It usually allows for a
> slightly tighter storage encoding, and also allows some optimizations
> to occur.
>
> AFAICT, in sqlite it only seems important for constraining the data
> appropriately. For a column which does not contain null data, it
> looks like the storage doesn't change between when it's defined with
> NOT NULL or without (I'm guessing this is basically because of
> manifest typing). Also, the EXPLAIN output doesn't change at all when
> I add or remove the NOT NULL. So it would seem that the actual
> performance would never change, nor would the storage footprint, so
> long as you don't ever attempt to insert null data (in which case the
> NOT NULL version would convert it to an appropriate default value,
> which might cause changes to storage or performance).
>
> Anyone got holes to shoot in my reasoning?

The size of the database file is unchanged with NOT NULL. Adding NOT NULL might in theory make INSERT and UPDATE go a little slower, since those statements must now check values before inserting them to make sure they are not NULL, but the performance difference is likely to be so slight as to be unmeasurable.
--
D. Richard Hipp <[EMAIL PROTECTED]>
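The observable effect of NOT NULL in SQLite, per the discussion above, is purely the constraint check at INSERT/UPDATE time: storage and query plans are unchanged, and an attempt to store NULL simply fails. A small demonstration, using Python's built-in sqlite3 module for illustration (table names invented; note that a plain NOT NULL column rejects NULLs with an error rather than substituting a default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER)")
conn.execute("CREATE TABLE b (x INTEGER NOT NULL)")

# Identical inserts succeed on both tables; the encoding of the stored
# row does not depend on the presence of the constraint.
conn.execute("INSERT INTO a VALUES (1)")
conn.execute("INSERT INTO b VALUES (1)")

# The one observable difference: a NULL is rejected by the constrained
# column at INSERT time.
try:
    conn.execute("INSERT INTO b VALUES (NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```
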
Re: [sqlite] Error in SQLite's CSV output
T wrote:
> I can use a slightly modified CSV parsing handler, by just using this as
> the row separator (instead of just plain linefeed):
>
> ");INSERT INTO VALUES("

Neat :)

Martin
[sqlite] Undefined symbols with libsqlite3.a 3.3.8 when just building the shell with Xcode 2.4...
Hello,

I am trying to build the standard SQLite 3.3.8 shell using a really simple Xcode project containing just the files "shell.c", "sqlite3.h" and "libsqlite3.a". The last two were obtained by building the sqlite sources as told by the readme:

tar xf sqlite-3.3.8.tar
mkdir build
cd build/
../sqlite-3.3.8/configure
make

From this, I extract the "libsqlite3.a" hidden in the ".libs" folder (why is it so hidden?) and the "sqlite3.h" file...

If I build using "Development" mode, I get no problem (it is dynamic linking, so any problem would appear as a runtime error), but with "Deployment" mode, I get an error:

/usr/bin/ld: warning prebinding disabled because of undefined symbols
/usr/bin/ld: Undefined symbols:
_sqlite3_enable_load_extension
_sqlite3_load_extension
/Users/luc/Developments/Projects_BSD/SQLite_338/shellWithLib/build/shellWithLib.build/Deployment/shellWithLib.build/Objects-normal/ppc/shell.o reference to undefined _sqlite3_enable_load_extension
/Users/luc/Developments/Projects_BSD/SQLite_338/shellWithLib/build/shellWithLib.build/Deployment/shellWithLib.build/Objects-normal/ppc/shell.o reference to undefined _sqlite3_load_extension
collect2: ld returned 1 exit status

Looking at the Makefile produced by "configure", it seems the file "loadext.c" is correctly added to the library... And I see nowhere that the symbol SQLITE_OMIT_LOAD_EXTENSION could be defined...

Regards,
Luc Demarche

..
Luc Demarche                 [EMAIL PROTECTED]
Mac OS Software Developer    Pyramide Ingenierie sprl
Tel: +32 87 292120           188 rue de Liege
Fax: +32 87 292129           B-4800 Verviers
Mail: [EMAIL PROTECTED]
..
Re: [sqlite] reg:SqliteDB file
OK, thank you. Let me try that and let you know.

- Original Message -
From: "Lloyd" <[EMAIL PROTECTED]>
Sent: Friday, October 27, 2006 12:36 PM
Subject: Re: [sqlite] reg:SqliteDB file

> Do you mean to open a database file which is in some other location?
>
> If yes: we use the
>
> int sqlite3_open(const char *filename, sqlite3 **ppDb);
>
> call to open the database, and in it we have to mention the file name.
> Instead of giving the file name alone, give the full path of the file.
> It will work fine.
>
> Thanks,
> Lloyd.
>
> On Fri, 2006-10-27 at 12:17 +0530, sandhya wrote:
> > Hi,
> > Is it possible to refer to a db file from a location other than the
> > current project location? If yes, how can we tell the project the
> > location of the DB file? Please help me.
> >
> > Thank you all,
> > - Sandhya
[sqlite] Benefits to use of "NOT NULL" where possible?
In some database systems, it can be beneficial to use "NOT NULL" as much as possible when defining tables. It usually allows for a slightly tighter storage encoding, and also allows some optimizations to occur.

AFAICT, in sqlite it only seems important for constraining the data appropriately. For a column which does not contain null data, it looks like the storage doesn't change between when it's defined with NOT NULL or without (I'm guessing this is basically because of manifest typing). Also, the EXPLAIN output doesn't change at all when I add or remove the NOT NULL. So it would seem that the actual performance would never change, nor would the storage footprint, so long as you don't ever attempt to insert null data (in which case the NOT NULL version would convert it to an appropriate default value, which might cause changes to storage or performance).

Anyone got holes to shoot in my reasoning?

Thanks,
scott

[It's sort of hard to search for emails to sqlite-users about "NOT NULL". There are many references to "NOT NULL" in table examples people include with their questions :-).]
Re: [sqlite] reg:SqliteDB file
Do you mean to open a database file which is in some other location?

If yes: we use the

int sqlite3_open(const char *filename, sqlite3 **ppDb);

call to open the database, and in it we have to mention the file name. Instead of giving the file name alone, give the full path of the file. It will work fine.

Thanks,
Lloyd.

On Fri, 2006-10-27 at 12:17 +0530, sandhya wrote:
> Hi,
> Is it possible to refer to a db file from a location other than the
> current project location? If yes, how can we tell the project the
> location of the DB file? Please help me.
>
> Thank you all,
> - Sandhya
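The full-path advice above can be sketched as follows. Python's built-in sqlite3 module stands in for a direct sqlite3_open() call here, and the file name is invented for the example; the point is only that any absolute filesystem path is accepted, not just a file in the current directory:

```python
import os
import sqlite3
import tempfile

# A hypothetical absolute path, outside the "project" directory.
path = os.path.join(tempfile.gettempdir(), "example_app.db")

# Opening by full path creates/opens the database file at that location,
# exactly as passing the same string to sqlite3_open() would in C.
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE IF NOT EXISTS settings (k TEXT, v TEXT)")
conn.commit()
conn.close()

created = os.path.exists(path)
print(created)  # True
os.remove(path)  # clean up the demo file
```
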
[sqlite] reg:SqliteDB file
Hi,

Is it possible to refer to a db file from a location other than the current project location? If yes, how can we tell the project the location of the DB file? Please help me.

Thank you all,
- Sandhya