Re: [sqlite] Vacuum command fails
Hi Simon, happy to provide you with answers; answers inline below. One thing I would like to know: does the VACUUM command use C: drive space, or the drive where my application is running? It seems to require enough free space on the C: drive.

On Thursday, 17 July 2014 2:14 PM, Simon Slavin <slav...@bigfraud.org> wrote:

On 17 Jul 2014, at 10:03pm, veeresh kumar <veeru...@yahoo.com> wrote:

> I did google as well :-). I did set PRAGMA temp_store = 1 (PRAGMA temp_store = FILE), but it did not help. I get the SQLITE_NOMEM error message.

What version of SQLite are you using? The simplest way is probably to tell us the result of "sqlite_version()" given by your own application.

Veeresh: SQLite version 3.8.4.3

Are you using any PRAGMAs at all in your code? In other words, can we assume that all PRAGMAs are set to their defaults?

Veeresh: Yes, we are using PRAGMAs, and all PRAGMAs are set to their defaults except PRAGMA temp_store, which is set to 1.

If you open your database in the SQLite shell tool and execute "PRAGMA integrity_check;" on it, do you get an error? If you open your database in the SQLite shell tool and execute "VACUUM;" on it, do you still get an error?

> It's not a space issue, as I do have enough space on the hard drive.

Just to make this clear, do you have at least three times as much free space on your hard drive as the size of the database?

Veeresh: The application runs on Windows. Earlier I was running the application on another machine where the C: drive had little free space (about 1 GB) and the D: drive had 20 GB free, and the VACUUM failed there. Now I am running the application on the D: drive (841 GB free) while the C: drive has 333 GB free, and the command seems to be working fine. It looks like VACUUM uses C: drive space?

I am sorry for these detailed questions, which may seem to be attacking you, but we have run out of obvious causes for this problem and are trying to check all the unlikely things.

Simon.
___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
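The behaviour debated above can be reproduced from any SQLite binding. Below is a minimal sketch (in Python's built-in sqlite3 module rather than the C++ code used in the thread) that forces PRAGMA temp_store = FILE before running VACUUM; the file name and row counts are illustrative only, not taken from the thread:

```python
import os
import sqlite3
import tempfile

def vacuum_with_file_temp_store(db_path):
    """Run VACUUM with temp_store forced to FILE (1); return the database
    file size before and after."""
    # Autocommit mode: VACUUM cannot run inside an open transaction.
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("PRAGMA temp_store = 1")  # 1 = FILE: spill temp data to disk, not RAM
    before = os.path.getsize(db_path)
    conn.execute("VACUUM")  # rebuilds the file; needs free space in the temp directory
    conn.close()
    return before, os.path.getsize(db_path)

# Build a small database with dead space, then vacuum it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 1000,) for _ in range(1000)])
conn.execute("DELETE FROM t")  # leaves free pages behind for VACUUM to reclaim
conn.commit()
conn.close()

before, after = vacuum_with_file_temp_store(path)
```

Note that temp_store only controls whether temporary structures go to disk or RAM; which *directory* the temp file lands in is decided separately by the OS and the (deprecated) temp-directory settings, which is exactly what the thread is observing on Windows.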
Re: [sqlite] Vacuum command fails
My earlier email didn't go through due to the size limit; it might come through later. Here is my summary of what I understood after this exercise, i.e. executing the VACUUM command:

- It requires free space on the C: drive, not the drive where your application is running. Since PRAGMA temp_store_directory is deprecated, we don't have an option to set the temp path.
- We have to explicitly set PRAGMA temp_store = FILE in order for the command to use disk space rather than RAM. The default temp_store setting isn't working in my case.

I ran the application after making sure it met the above criteria, and the command was successful. If my understanding is correct, I guess the help document needs updating.

Thank you
-Veeresh

On Thursday, 17 July 2014 10:30 AM, Simon Slavin <slav...@bigfraud.org> wrote:

On 17 Jul 2014, at 6:24pm, veeresh kumar <veeru...@yahoo.com> wrote:

> I tried to execute VACUUM on an SQLite database of size 14 GB and it failed with an "out of memory" error. I was under the impression that it copies the database file and then performs the vacuum on that file. Here it looks like it's using RAM.

What's your PRAGMA temp_store set to?

Simon.
Re: [sqlite] Vacuum command fails
I did google as well :-). I did set PRAGMA temp_store = 1 (PRAGMA temp_store = FILE), but it did not help; I get the SQLITE_NOMEM error message. It's not a space issue, as I do have enough space on the hard drive. Whenever the VACUUM command is executed, it uses RAM, and that's where it runs out of memory.

On Thursday, 17 July 2014 12:11 PM, Roger Binns <rog...@rogerbinns.com> wrote:

On 17/07/14 10:42, veeresh kumar wrote:

> When I execute the command PRAGMA temp_store, it returned 0. What is the ideal value that needs to be set?

A quick google search would have found the answer:

https://sqlite.org/pragma.html#pragma_temp_store

Roger
Re: [sqlite] Vacuum command fails
When I execute the command PRAGMA temp_store, it returned 0. What is the ideal value that needs to be set?

On Thursday, 17 July 2014 10:30 AM, Simon Slavin <slav...@bigfraud.org> wrote:

On 17 Jul 2014, at 6:24pm, veeresh kumar <veeru...@yahoo.com> wrote:

> I tried to execute VACUUM on an SQLite database of size 14 GB and it failed with an "out of memory" error. I was under the impression that it copies the database file and then performs the vacuum on that file. Here it looks like it's using RAM.

What's your PRAGMA temp_store set to?

Simon.
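For reference, the pragma page that Roger links to gives three values: 0 = DEFAULT (use the compile-time choice), 1 = FILE, 2 = MEMORY. A quick sketch in Python's sqlite3 module showing how to query and change the setting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# PRAGMA temp_store: 0 = DEFAULT (compile-time choice), 1 = FILE, 2 = MEMORY.
default_value = conn.execute("PRAGMA temp_store").fetchone()[0]

conn.execute("PRAGMA temp_store = 1")  # force temporary tables/indices onto disk
file_value = conn.execute("PRAGMA temp_store").fetchone()[0]

conn.execute("PRAGMA temp_store = 2")  # force them into RAM instead
memory_value = conn.execute("PRAGMA temp_store").fetchone()[0]
```

The 0 returned in the thread therefore means "whatever the library was compiled with", which is why the observed behaviour can differ between builds.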
[sqlite] Vacuum command fails
I tried to execute VACUUM on an SQLite database of size 14 GB and it failed with an "out of memory" error. I was under the impression that it copies the database file and then performs the vacuum on that file. Here it looks like it's using RAM. I would appreciate inputs from the experts.
[sqlite] Acceptable growth of WAL file size?
I saw the following listed as a disadvantage at http://www.sqlite.org/wal.html#ckpt:

"WAL works best with smaller transactions. WAL does not work well for very large transactions. For transactions larger than about 100 megabytes, traditional rollback journal modes will likely be faster. For transactions in excess of a gigabyte, WAL mode may fail with an I/O or disk-full error. It is recommended that one of the rollback journal modes be used for transactions larger than a few dozen megabytes."

Query: I would like to understand, if the WAL file size grows beyond 100 megabytes into the range of, say, 500 to 900 megabytes, what the impact would be.

- Is it just a read/write performance impact? Or
- Will there be a disk failure, or could the database get corrupted?

We have decided to move our application from TRUNCATE mode to WAL mode. To avoid the WAL file growing too large, we are planning a maintenance window during which the application won't be doing any reads or writes to the database, so a checkpoint can be run to reduce the WAL file size.

As always, I appreciate the experts' valuable comments in making my application work well with the SQLite database.

Thank you
-Veeresh
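The maintenance-window plan described above can be sketched as follows (Python sqlite3, illustrative names and row counts). One caveat: the TRUNCATE checkpoint mode used here, which resets the -wal file to zero bytes, was added in SQLite 3.8.8, later than the versions discussed in this thread; on older versions only PASSIVE/FULL/RESTART are available and the file is reused rather than shrunk:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])

wal_path = path + "-wal"
grown = os.path.getsize(wal_path)  # the WAL has accumulated frames

# During the maintenance window (no readers or writers active), a TRUNCATE
# checkpoint transfers every frame into the database and zeroes the WAL.
busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
shrunk = os.path.getsize(wal_path)
conn.close()
```

The returned `busy` flag is 0 when no concurrent reader prevented the checkpoint from completing, which is the condition the maintenance window is meant to guarantee.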
Re: [sqlite] Reader locks writer in truncate mode?
I should have been more specific. There are lots of threads; identifying which thread holds the lock at any given time would have been helpful. I also need clarification on this: should a reader thread still be able to read data from the database until the writer thread's transaction tries to commit the data?

On Thursday, 13 March 2014 4:43 PM, Simon Slavin <slav...@bigfraud.org> wrote:

On 13 Mar 2014, at 11:12pm, veeresh kumar <veeru...@yahoo.com> wrote:

> Thanks a lot for the response. Is there any way to identify which thread/process is actually blocking the reader thread, or vice versa?

Only in that it's the one that didn't get the error message. The one that's in the middle of a SQLite API call.

Simon.
Re: [sqlite] Reader locks writer in truncate mode?
Thanks a lot for the response. Is there any way to identify which thread/process is actually blocking the reader thread, or vice versa?

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Thursday, March 13, 2014 2:30 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Reader locks writer in truncate mode?

On 13 Mar 2014, at 9:27pm, Igor Tandetnik <i...@tandetnik.org> wrote:

> On 3/13/2014 5:24 PM, veeresh kumar wrote:
>> In a multi-threaded application, say a reader thread has read 100 records from the table and the read is still in progress when a writer thread writes data and tries to commit. This causes a database lock. Is this expected?
>
> Assuming the two threads use two distinct database connections - yes, this is expected.

Don't forget that by default SQLite does /no/ retries if there is an access clash. You can change this by setting a timeout:

<https://sqlite.org/c3ref/busy_timeout.html>

Simon.
[sqlite] Reader locks writer in truncate mode?
In a multi-threaded application, say a reader thread has read 100 records from a table and the read is still in progress when a writer thread writes data and tries to commit. This causes a database lock. Is this expected?

Note: journal_mode is set to TRUNCATE.
[sqlite] Manual call to sqlite3_wal_checkpoint
I have turned off auto-checkpointing using PRAGMA wal_autocheckpoint = 0. There is a background thread which calls sqlite3_wal_checkpoint at some interval of time to ensure that the WAL file does not grow too big. I don't see any error returned by the sqlite3_wal_checkpoint API, but I also don't see the WAL file size being reduced after a successful call. One thing I am sure of is that there are no write operations happening during this process. Is there something I am missing?

Thank you
-Veeresh
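A likely explanation for the observation above: sqlite3_wal_checkpoint performs a PASSIVE checkpoint, which copies frames into the main database and marks the -wal file reusable but never shrinks it; actually truncating the file requires the TRUNCATE checkpoint mode (sqlite3_wal_checkpoint_v2 with SQLITE_CHECKPOINT_TRUNCATE, or PRAGMA wal_checkpoint(TRUNCATE)), which was only added in SQLite 3.8.8. A sketch of the contrast in Python's sqlite3 module (illustrative file name and row count):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ckpt_demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("PRAGMA wal_autocheckpoint = 0")  # disable auto-checkpoints, as in the thread
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5000)])

wal_path = path + "-wal"

# PASSIVE (what sqlite3_wal_checkpoint does): frames are copied into the
# main database, but the -wal file keeps its size and is merely reused.
conn.execute("PRAGMA wal_checkpoint(PASSIVE)")
size_after_passive = os.path.getsize(wal_path)

# TRUNCATE (SQLite >= 3.8.8): also resets the -wal file to zero bytes.
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
size_after_truncate = os.path.getsize(wal_path)
conn.close()
```

So "no error and no size reduction" is the expected outcome of a successful passive checkpoint; the file will simply stop growing as its space is reused.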
Re: [sqlite] Time taken to perform checkpoint operation and does it lock database during this operation?
I guess this is what is happening in my application: the WAL size has grown to 1 GB. But again, my concern is why each commit takes such a long time; I see a big pause before the commit happens.

In a multi-threaded application, SQLite may have the limitations below. Is this a fair statement?

- Response time to the client application would increase, because only one thread can update the database at a time.
- The WAL size may grow, since a successful checkpoint operation depends on whether any read operation is in an open transaction.

Are there any improvements, suggestions, or best practices for multi-threaded applications?

Thank you,
-Veeresh

On Thursday, 20 February 2014 4:46 PM, Richard Hipp <d...@sqlite.org> wrote:

On Thu, Feb 20, 2014 at 7:41 PM, veeresh kumar <veeru...@yahoo.com> wrote:

> Haven't measured the time, but I have seen a pause before the commit happens. As stated during our earlier discussion, my service cannot pause for more than 10 seconds, as that would result in a timeout for the clients connected to this service.
>
> My understanding of the checkpoint, taking my settings below as an example, is that the WAL size would never grow beyond 1 MB, because a checkpoint occurs after the WAL file reaches 1 MB. Please correct me if I am wrong.

Long-running or overlapping readers can prevent the checkpoint from occurring. The checkpoint will be retried again and again, but if there is always a read transaction open on a transaction other than the most recent transaction, then the checkpoint will never have an opportunity to run to completion and reset the WAL file.

-- 
D. Richard Hipp
d...@sqlite.org
Re: [sqlite] Time taken to perform checkpoint operation and does it lock database during this operation?
Haven't measured the time, but I have seen a pause before the commit happens. As stated during our earlier discussion, my service cannot pause for more than 10 seconds, as that would result in a timeout for the clients connected to this service.

My understanding of the checkpoint, taking my settings below as an example, is that the WAL size would never grow beyond 1 MB, because a checkpoint occurs after the WAL file reaches 1 MB. Please correct me if I am wrong.

Settings: PRAGMA synchronous=NORMAL; with the default auto-checkpoint and page size = 1024 bytes.

On Thursday, 20 February 2014 4:21 PM, Richard Hipp <d...@sqlite.org> wrote:

On Thu, Feb 20, 2014 at 7:08 PM, veeresh kumar <veeru...@yahoo.com> wrote:

> Hi,
> I am using "PRAGMA journal_mode=WAL;" with "PRAGMA synchronous=NORMAL;", with the default auto-checkpoint and page size = 1024 bytes. Since a checkpoint occurs automatically after every 1 MB, how much time would a checkpoint operation take to complete as the database grows large (range 1 GB - 50 GB)? I understand that it depends on the hard drive, but on a very good configuration, will this operation ever exceed 10 seconds? Also, does the database get locked during this operation?

The time needed for a checkpoint depends much more on the size of the WAL file than on the size of the database. For a 1 MB WAL file on modern hardware, I would think a checkpoint would require perhaps 50 to 100 milliseconds. Have you done measurements to see how long it takes on your system?

The database cannot be written while a checkpoint is underway. But reads can run concurrently with a checkpoint.

-- 
D. Richard Hipp
d...@sqlite.org
[sqlite] Time taken to perform checkpoint operation and does it lock database during this operation?
Hi,

I am using "PRAGMA journal_mode=WAL;" with "PRAGMA synchronous=NORMAL;", with the default auto-checkpoint and page size = 1024 bytes. Since a checkpoint occurs automatically after every 1 MB, how much time would a checkpoint operation take to complete as the database grows large (range 1 GB - 50 GB)? I understand that it depends on the hard drive, but on a very good configuration, will this operation ever exceed 10 seconds? Also, does the database get locked during this operation?

Thank you
-Veeresh
Re: [sqlite] wal_autocheckpoint
Yes, pages are 1024 bytes, and the checkpoint is writing 500 MB. I see that I am unable to write data during the checkpoint. Any solution?

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Friday, January 31, 2014 10:01 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] wal_autocheckpoint

On 02/01/2014 12:13 AM, veeresh kumar wrote:
> Hi All,
>
> Below are my SQLite settings. The current database size is ~8 GB.
>
> PRAGMA journal_mode = WAL
> PRAGMA synchronous = NORMAL
> PRAGMA wal_autocheckpoint = 50"
>
> With the above checkpoint setting, the time taken to commit a transaction to the actual database file after reaching a checkpoint is 2.5 to 3 minutes for an 8 GB database. Is this an acceptable time by SQLite standards?

Are your pages 1024 bytes? If so, I guess the checkpoint is writing 500 MB of (possibly) non-contiguous data to the database file. Somewhere in the neighbourhood of 3 MB/second. Seems quite plausible.

> My concern is, during these 2.5 to 3 minutes, will the application be able to perform any new write operation, or does the application have to wait for 2-3 minutes until the merging of the data from the WAL file into the actual database file is complete?

New writes are possible while a checkpoint is ongoing. Although, if you are using synchronous=FULL, they might be really slow, depending on the file-system. synchronous=NORMAL or OFF should be OK. Something to test if it matters to you.

Dan.
[sqlite] wal_autocheckpoint
Hi All,

Below are my SQLite settings. The current database size is ~8 GB.

PRAGMA journal_mode = WAL
PRAGMA synchronous = NORMAL
PRAGMA wal_autocheckpoint = 50"

With the above checkpoint setting, the time taken to commit a transaction to the actual database file after reaching a checkpoint is 2.5 to 3 minutes for an 8 GB database. Is this an acceptable time by SQLite standards?

My concern is, during these 2.5 to 3 minutes, will the application be able to perform any new write operation, or does the application have to wait for 2-3 minutes until the merging of the data from the WAL file into the actual database file is complete?

Our application messaging is such that if the service does not respond to a message from a client within 10 seconds, the client gets a timeout. Is lowering the value of wal_autocheckpoint the only way to handle this, or is there another way?

Thank you
-Veeresh
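One detail worth being explicit about, since Dan's reply hinges on it: wal_autocheckpoint is measured in pages, not bytes, so the byte threshold is the pragma value times the page size (Dan's 500 MB figure assumes 1024-byte pages). A quick sketch in Python's sqlite3 module; the value 500000 below is a hypothetical example, not the (partially garbled) value from the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# wal_autocheckpoint is measured in *pages*: a checkpoint is attempted once
# the WAL holds this many frames.  With the default of 1000 pages and a
# 1024-byte page size, that is roughly 1 MB of WAL.
default_pages = conn.execute("PRAGMA wal_autocheckpoint").fetchone()[0]

# Hypothetical example: ~500 MB of WAL at a 1024-byte page size.
conn.execute("PRAGMA wal_autocheckpoint = 500000")
new_pages = conn.execute("PRAGMA wal_autocheckpoint").fetchone()[0]
```

So a large autocheckpoint value trades fewer, bigger checkpoint pauses (the 2.5 to 3 minutes observed here) against more frequent, smaller ones.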
[sqlite] Performance based on Sqlite Configuration
I am finding it hard to determine the best configuration for a large-scale application. Our database size grows from 0 to 45 GB. As the database grows, performance seems to degrade; performance of the same application is better on SQL Server by comparison. I am in the middle of identifying the bottleneck, and the first thing I am looking at is the configuration.

Current settings:

PRAGMA journal_mode = TRUNCATE
PRAGMA page_size;  (read into lPageSize)
UINT64 ui64MaxPageCount = 53687091200 / lPageSize;  // 50 GB
PRAGMA max_page_count = %I64u;  (formatted with ui64MaxPageCount)

I am planning to change to the settings below to see if that improves the performance:

PRAGMA journal_mode = WAL
PRAGMA page_size = 4096
PRAGMA cache_size = 16384
PRAGMA synchronous = NORMAL
PRAGMA wal_autocheckpoint = 10

Does setting wal_autocheckpoint to 10 mean that the data gets committed to the disk after it reaches 100 MB? If the machine crashes or the power goes off during this time, is 100 MB of data lost? Are there any API calls which would commit the data to disk forcibly?

Let me know if the above settings would improve the application performance, or whether something else needs to be configured.
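The proposed settings can be collected into a connection-setup helper. This is a sketch in Python's sqlite3 module (the thread's own code is C++), with two caveats folded in as comments: page_size only takes effect before the database file is populated (otherwise a VACUUM is needed), and cache_size is counted in pages:

```python
import os
import sqlite3
import tempfile

def open_tuned(db_path):
    """Open a connection with the settings proposed in the thread.
    A sketch, not a definitive tuning recipe."""
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("PRAGMA page_size = 4096")    # effective only before the file has content
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("PRAGMA cache_size = 16384")  # in pages: 16384 * 4096 bytes = 64 MB
    conn.execute("PRAGMA synchronous = NORMAL")
    max_pages = 53687091200 // 4096            # cap the database at 50 GB, as in the thread
    conn.execute("PRAGMA max_page_count = %d" % max_pages)
    return conn

path = os.path.join(tempfile.mkdtemp(), "tuned.db")
conn = open_tuned(path)
conn.execute("CREATE TABLE t(x)")  # first write: page_size is now locked in
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
```

On the durability question in the message: with synchronous=NORMAL in WAL mode a power loss can lose recent transactions still in the WAL, but it should not corrupt the database; forcing data to the main file on demand is exactly what a manual checkpoint (PRAGMA wal_checkpoint / sqlite3_wal_checkpoint) does.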
[sqlite] performance difference based on the sqlite configuration?
I have a similar issue. I am finding it hard to determine the best configuration for a large-scale application. Our database size grows from 0 to 45 GB. As the database grows, performance seems to degrade. Major operations include insertion/read/delete.

Current settings:

PRAGMA journal_mode = TRUNCATE
PRAGMA page_size;  (read into lPageSize)
UINT64 ui64MaxPageCount = 53687091200 / lPageSize;  // 50 GB
PRAGMA max_page_count = %I64u;  (formatted with ui64MaxPageCount)

I am planning to change to the settings below to see if that improves the performance:

PRAGMA journal_mode = WAL
PRAGMA page_size = 4096
PRAGMA cache_size = 16384
PRAGMA wal_autocheckpoint = 10

Does setting wal_autocheckpoint to 10 mean that the data gets committed to the disk after it reaches 100 MB? If the machine crashes or the power goes off during this time, is 100 MB of data lost? Are there any API calls which would commit the data to disk forcibly?

Let me know if the above settings would improve the application performance, or whether something else needs to be configured.

Thanks
-Veeresh

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Richard Hipp
Sent: Sunday, January 19, 2014 9:37 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Dramatic performance difference between different "PRAGMA synchronous" settings for bulk inserts

In WAL mode with synchronous=NORMAL, SQLite only syncs (FlushFileBuffers() on Windows) when it does a checkpoint operation. Checkpoints should be happening automatically whenever the WAL file exceeds about 1 MB in size. For an 8 GB database, probably there are about 8000 sync operations, therefore. If each takes about 2 seconds, that would pretty much account for the extra 4.5 hours.

If you are creating a new database from scratch, it is safe to set synchronous=OFF. If you lose power in the middle, your database file will probably be corrupt, but since you were creating it from scratch you can easily recover just by starting the database creation process over again from the beginning.

If you want to try running with synchronous=NORMAL, you might try setting PRAGMA wal_autocheckpoint=10; (from the default of 1000) which will make for dramatically larger WAL files, but also dramatically fewer syncs. Then the syncs will use just 5 or 6 minutes instead of 4.5 hours. Hopefully.

On Sun, Jan 19, 2014 at 9:00 AM, Mario M. Westphal wrote:

> I have a performance effect which I don't quite understand. Maybe I'm using the wrong settings or something. Sorry for the long post, but I wanted to include all the info that may be important.
>
> My software is written in C++ and runs on Windows 7/8; the SQLite database file is either on a local SATA RAID disk or an SSD. Typical database sizes are between 2 GB and 8 GB. The largest tables hold several million entries. FTS4 is also used, which creates large tables as well. Fast internal RAID disks, SSD. Four Xeon cores. 8 GB RAM.
>
> I'm using SQLite 3.8.0.2
> WAL mode, shared cache enabled.
> locking_mode=NORMAL
> checkpoint_fullfsync=0
> pragma page_size=4096
> pragma cache_size=16384
>
> General (retrieval) performance is excellent!
>
> During an ingest phase, my application pumps hundreds of thousands of records into multiple tables. There are massive amounts of writes during that phase, with different record sizes, tables with one to four indices, etc.
>
> My application is multi-threaded and inserts data into the database concurrently from multiple threads. The threads process data in batches and use SQLite transactions to commit all records of a batch to the database. Transactions gain a lot of speed, which outweighs the side effects of potential blocking. The threads monitor the execution times of the database operations and adjust the batch size to balance speed against transaction lock duration. Slower operations cause smaller batches, which results in shorter database locks and better concurrency. The system adapts fairly well to system performance and data structure.
>
> The performance was not that bad, but far from good. For a given set of input data (100,000 "elements"), the execution estimate was about 5 hours, with the database on a high-speed SSD. The largest table holds about 5 million entries afterwards.
>
> ***With one single change*** I improved the execution time from 5 hours down to about 30 minutes! I changed
>
> PRAGMA synchronous=NORMAL
>
> to
>
> PRAGMA synchronous=OFF
>
> All other database write operations just 'fly' now. I'm even more impressed with SQLite than before, but I wonder why the change is so _dramatic_?
>
> From the docs my impression was that using WAL mode is ideal for bulk inserts. That wrapping large bulks of data into smaller batches, wrapped in BEGIN/COMMIT, is best for performance, etc. That using synchronous=NORMAL limits the
[sqlite] Is there an API to detect if a database is already connected?
Hi,

I want to detect whether an SQLite database is already connected to an application. Is there any SQLite API available for this? I have a use case where two different applications may try to connect to the same database, and I want to detect that and inform the user.

Thank you
-Veeresh
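SQLite has no direct "is anyone else connected?" API, so any answer here is a workaround rather than an official recipe. One common pattern (sketched below in Python's sqlite3 module; the probe-table name is made up for illustration) is to open with PRAGMA locking_mode = EXCLUSIVE and attempt a write: the first application takes and keeps the exclusive file lock, and a second application's attempt fails with SQLITE_BUSY, which is the detection signal:

```python
import os
import sqlite3
import tempfile

def try_exclusive(db_path, timeout=0.1):
    """Attempt to take exclusive ownership of the database file.
    Returns (conn, True) on success, (None, False) if another
    connection already owns it."""
    conn = sqlite3.connect(db_path, timeout=timeout, isolation_level=None)
    conn.execute("PRAGMA locking_mode = EXCLUSIVE")
    try:
        # The exclusive lock is only acquired on the first read/write,
        # and is then retained until the connection closes.
        conn.execute("CREATE TABLE IF NOT EXISTS _owner_probe(x)")
        return conn, True
    except sqlite3.OperationalError:  # "database is locked" = SQLITE_BUSY
        conn.close()
        return None, False

path = os.path.join(tempfile.mkdtemp(), "single_owner.db")
first, got_first = try_exclusive(path)    # first application: succeeds
second, got_second = try_exclusive(path)  # second application: detected and refused
```

The trade-off is that the owning application blocks all other access, not just a second full application instance, so this only fits the "exactly one application at a time" use case described above.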
[sqlite] REINDEX - Performance increase?
We are looking for database maintenance commands for the SQLite database and came across two commands which we felt would improve performance if used: VACUUM and REINDEX. I came to know that VACUUM just helps reclaim space and does not give any performance increase. I just wanted to check whether REINDEX, or any other SQLite command you would suggest, should be run every month.

Our database size varies from 30 GB to 50 GB, with various operations like insert/update/delete performed on a day-to-day basis. I just wanted to know whether there is a need to maintain the SQLite database so that performance does not degrade, or whether to just leave it as it is.
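For concreteness, the two maintenance commands usually mentioned alongside VACUUM are REINDEX, which rebuilds indices from their tables, and ANALYZE, which refreshes the statistics the query planner stores in sqlite_stat1. A small sketch in Python's sqlite3 module (table and index names are illustrative); whether a monthly run actually helps a given workload is something to measure, not assume:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a, b)")
conn.execute("CREATE INDEX t_a ON t(a)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, i * 2) for i in range(1000)])
conn.commit()

conn.execute("REINDEX t_a")  # rebuild one index from its table's data
conn.execute("ANALYZE")      # gather planner statistics into sqlite_stat1

# ANALYZE has recorded row/selectivity statistics for the index.
rows = conn.execute(
    "SELECT stat FROM sqlite_stat1 WHERE idx = 't_a'").fetchall()
```

Of the two, ANALYZE is the one that can directly change query plans (and thus performance); REINDEX mainly helps if indices have become internally fragmented or were built under a collation that has changed.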
Re: [sqlite] Vacuum command in a transaction?
Thanks Igor and Simon for your inputs. I was under the impression that VACUUM would also help performance, since it defragments the database.

On Tuesday, 10 December 2013 3:02 PM, Simon Slavin <slav...@bigfraud.org> wrote:

On 10 Dec 2013, at 8:04pm, veeresh kumar <veeru...@yahoo.com> wrote:

> If we put it inside a transaction, we would definitely get a SQLITE_BUSY error. Is there any way to avoid this error?

It is normal for SQL engines to prevent VACUUM inside a transaction, and to lock the database from other threads/processes/users while it operates. For instance, Postgres will issue a consistent error message if you don't close your transaction before you do a VACUUM. Since it is not a 'real' SQL command (in that it operates on file structure, not tables, rows or columns) it shouldn't be part of a transaction.

Don't forget what VACUUM does: it doesn't operate just on the data in the database, it reorganises the entire database file, and can move everything into a different place and delete entire pages. It can even change _rowid_s. There is no way it can operate while allowing other SQL commands to work, so it may as well issue an explicit lock.

> The database size that we use in the application can grow up to 50 GB.

A 50 GB database in SQLite is no problem, and no reason to run VACUUM. The time to run VACUUM is when (a) space is at a premium, or you need to take copies, and (b) VACUUM will save significant space.

Simon.
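Simon's point that VACUUM is refused inside a transaction is easy to confirm; SQLite rejects it with a specific error rather than returning SQLITE_BUSY. A sketch in Python's sqlite3 module:

```python
import sqlite3

# Autocommit mode, so we control transactions explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t(x)")

# Inside an open transaction, VACUUM is refused outright.
conn.execute("BEGIN")
vacuum_in_txn_error = ""
try:
    conn.execute("VACUUM")
except sqlite3.OperationalError as e:
    vacuum_in_txn_error = str(e)  # "cannot VACUUM from within a transaction"
conn.execute("ROLLBACK")

# Outside a transaction it runs normally.
conn.execute("VACUUM")
```

So the practical rule from the thread: commit or roll back first, run VACUUM on its own from a connection in autocommit mode, and expect it to lock out other writers while it rebuilds the file.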
[sqlite] Vacuum command in a transaction?
The database size that we use in the application can grow up to 50 GB. We have the option of shrinking the database using the VACUUM command. I understand that the VACUUM command takes a lot of time to execute, but I don't see any other way. I wanted to know the right way to execute the VACUUM command:

- Inside a transaction, or without a transaction?
- What would happen if we terminate the VACUUM command part-way through when it is not executed in a transaction? Does the database get corrupted if we stop/kill the operation in between?
- If we put it inside a transaction, we would definitely get a SQLITE_BUSY error. Is there any way to avoid this error?

Thanks in advance.
Veeresh
[sqlite] count (*) performance
Hi,

I see the following item in the release history for 3.8.1:

"Estimate the sizes of table and index rows and use the smallest applicable B-Tree for full scans and "count(*)" operations."

Does this mean that the performance of count(*) has been improved in 3.8.1, and if so, by how much? We were using count(*) to get the row count in our application, and it was terribly slow with 2 million records. I just wanted to know whether, if we upgrade from 3.8.0 to 3.8.1 or 3.8.2, there will be any improvement in the query performance.
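The 3.8.1 change only lets count(*) scan the *smallest* B-tree (often a narrow index instead of the full table), so it is still an O(n) scan. When an exact count must be cheap at read time, one workaround (a hypothetical schema sketched here, not something from the thread) is to maintain the count incrementally with triggers, trading a little write cost for an O(1) lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(x);
    -- Hypothetical workaround: keep the row count in a one-row side table
    -- so reading it is O(1) instead of a full B-tree scan.
    CREATE TABLE t_count(n INTEGER);
    INSERT INTO t_count VALUES (0);
    CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
        UPDATE t_count SET n = n + 1;
    END;
    CREATE TRIGGER t_del AFTER DELETE ON t BEGIN
        UPDATE t_count SET n = n - 1;
    END;
""")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(500)])
conn.execute("DELETE FROM t WHERE x < 100")
conn.commit()

fast_count = conn.execute("SELECT n FROM t_count").fetchone()[0]       # O(1)
slow_count = conn.execute("SELECT count(*) FROM t").fetchone()[0]      # full scan
```

Both queries agree, but only the first stays constant-time as the table grows to millions of rows.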
[sqlite] Disable specific foreign key constraint sqlite3
Hi,

Is there a way to disable/enable a specific foreign key constraint in SQLite 3? Below is the SQL Server query; I would like to know the equivalent in SQLite 3.

ALTER TABLE tablename WITH NOCHECK NOCHECK CONSTRAINT FK_Column

Thank you
-Veeresh
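As far as I know, SQLite has no per-constraint equivalent of SQL Server's NOCHECK: PRAGMA foreign_keys toggles enforcement for *all* foreign keys on a connection (and PRAGMA defer_foreign_keys can postpone checks until commit), but individual constraints cannot be switched off. A sketch of the connection-wide toggle in Python's sqlite3 module, with made-up parent/child tables:

```python
import sqlite3

# Autocommit mode: PRAGMA foreign_keys is a no-op inside an open transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE parent(id INTEGER PRIMARY KEY);
    CREATE TABLE child(pid INTEGER REFERENCES parent(id));
""")

# Enforcement off (SQLite's historical default): the orphan row is accepted.
conn.execute("PRAGMA foreign_keys = OFF")
conn.execute("INSERT INTO child VALUES (999)")
conn.execute("DELETE FROM child")

# Enforcement on: the same insert is now rejected.
conn.execute("PRAGMA foreign_keys = ON")
violation_caught = False
try:
    conn.execute("INSERT INTO child VALUES (999)")
except sqlite3.IntegrityError:  # "FOREIGN KEY constraint failed"
    violation_caught = True
```

If only one constraint must be bypassed, the usual SQLite approach is to turn enforcement off for the duration of the bulk operation and re-verify afterwards with PRAGMA foreign_key_check.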
[sqlite] Convert Sqlserver script to Sqlite script
Hi,

Is there any tool which would convert a SQL Server script to an SQLite script?

Thank you