[sqlite] File Locking in WinCE
Hi,

While browsing the SQLite amalgamation file sqlite3.c, I found the following comment:

/*
** WinCE lacks native support for file locking so we have to fake it
** with some code of our own.
*/
#if SQLITE_OS_WINCE
typedef struct winceLock {
  int nReaders;       /* Number of reader locks obtained */
  BOOL bPending;      /* Indicates a pending lock has been obtained */
  BOOL bReserved;     /* Indicates a reserved lock has been obtained */
  BOOL bExclusive;    /* Indicates an exclusive lock has been obtained */
} winceLock;
#endif

I also see some other functions in sqlite3.c that look like replacements for LockFile(), UnlockFile() and LockFileEx(). Is there some problem with the existing LockFileEx() (http://msdn.microsoft.com/en-us/library/ee489737.aspx) and UnlockFileEx() (http://msdn.microsoft.com/en-us/library/ee490757.aspx)?

I am asking because my WinCE application sometimes has a problem where changes made through one sqlite3 instance "a" are not reflected in another instance "b" of the same database file, even though both exist in the same process:

Thread A: initialize sqlite3 instance a
Thread A: initialize sqlite3 instance b
Thread B: modify via instance a
Thread A: read via instance b
Thread B: modify via instance a
Thread A: close sqlite3 instance b
Thread A: initialize sqlite3 instance b
Thread A: read via instance b   // the changes from a are sometimes not reflected

I worked around the problem by using instance a for both Thread A and Thread B (they are in the same process), eliminating instance b. But now I plan to open the same database from another process, so I am a bit worried about concurrency on WinCE. I am using WinCE 6.

Thank you,
Afriza

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
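For what it's worth, the visibility the poster expects does hold on ordinary builds: once a writer connection commits, a second connection in the same process sees the change without being reopened. The usual causes of "stale" reads are an uncommitted write or a reader holding an old read transaction open. A minimal sketch using Python's stdlib sqlite3 bindings (not WinCE; the file path and table name are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical database file; any writable path works.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

con_a = sqlite3.connect(path)  # instance "a": the writer
con_b = sqlite3.connect(path)  # instance "b": the reader

con_a.execute("CREATE TABLE t(x)")
con_a.commit()

# The writer inserts and COMMITs. Until the commit completes,
# connection b cannot see the new row.
con_a.execute("INSERT INTO t VALUES (1)")
con_a.commit()

# The reader sees the committed change without being reopened,
# provided it is not itself inside a stale read transaction.
rows = con_b.execute("SELECT x FROM t").fetchall()
print(rows)  # -> [(1,)]
```

If instance b still reads stale data, the first things to check are whether the writer really committed and whether the reader has an open transaction pinning an old snapshot.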
[sqlite] handling of BLOB bound parameters
I just stumbled across a problem where SQLite would be stuck for quite a long time inside sqlite3VdbeExpandSql when using bound BLOB parameters, i.e. my query looks like

INSERT INTO foo VALUES (?,?,?,?...)

and one of the parameters is a BLOB of about 700k. What I found is that when this query is executed, SQLite will actually produce a string representation of the BLOB, which is done using the following loop:

assert( pVar->flags & MEM_Blob );
sqlite3StrAccumAppend(&out, "x'", 2);
for(i=0; i<pVar->n; i++){
  sqlite3XPrintf(&out, "%02x", pVar->z[i]&0xff);
}
sqlite3StrAccumAppend(&out, "'", 1);

Here, sqlite3XPrintf will call sqlite3StrAccumAppend, which essentially mallocs a new block that is 3 bytes larger than the old block, copies the old data, appends 2 characters (hex digits) and frees the old block. This looks like it would be *very* inefficient.

I hadn't noticed this problem before, so I'm not sure whether other conditions cause this code path to be taken now, but I thought I'd raise the question nonetheless. (I'm using SQLite 3.6.22 on Mac OS X.)

Any ideas / answers?

Thanks,
-jens
Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS
Hi Dan,

I did some more investigation of the issue and I believe there is a synchronization problem happening here.

After mmapping the shm (wal-index) file into process memory, the WAL indexes are written into the mmapped area, but this data does not get synchronized with the physical (shm) file. As a result, when mmap() is called a second time to map the 32k-64k memory region, it synchronizes the complete mapped region (the previous 32k region) with the physical file, even though a valid offset is passed. I am not sure whether this is the intended behaviour of mmap(). While debugging, before the mmap() call I checked the mapped region and it held valid indexes, whereas after the call it was all zeros. I also found that the shm file is always filled with zeros, even after commits.

When I add an msync() statement (to sync the shm file) before the mmap() call as shown below, the problem is not seen. In this case the shm file contains valid 32-bit indexes, because the data is synchronized before the next mmap() call executes:

while(pShmNode->nRegion<=iRegion){
  int ret = msync( apNew[0], iRegion*szRegion, MS_SYNC );
  void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE, MAP_SHARED,
                    pShmNode->h, pShmNode->nRegion*szRegion );

With the msync() call added, all my previously failing test cases pass. I don't see any msync() call in the SQLite amalgamation/wal.c file. I believe the data in the mapped region and the physical file are not synced automatically; we need to do it explicitly with msync(). I don't know whether there is some other mechanism in SQLite through which the data is synchronized. Does the call to sqlite3OsSync() sync the shm file as well, or is the shm file left unsynced on purpose?

This is all my understanding, and I am not sure whether this is the actual cause of the issue. Please correct me if my approach/understanding is wrong.
Thanks,
Praveen

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Friday, October 08, 2010 9:33 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS

On Oct 8, 2010, at 9:44 PM, Raj, Praveen wrote:

> Hello,
>
> I debugged the SQLite functions and here is my finding: the call to
> mmap() in the function unixShmMap is causing the issue.
>
>   void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE, MAP_SHARED,
>                     pShmNode->h, iRegion*szRegion);
>
> It is setting the previous memory region(s) to zero while mapping the
> new ones. The mmap() call internally uses the QNX API mmap64() to map
> the required memory region. Not sure what is happening here; I just
> need to dig into the memory mapping to find out what's going on and
> hopefully find a solution.
>
> Dan - do you have any idea why this could be happening?

Sounds like a bug in QNX to me.

The information contained in this message may be confidential and legally protected under applicable law. The message is intended solely for the addressee(s). If you are not the intended recipient, you are hereby notified that any use, forwarding, dissemination, or reproduction of this message is strictly prohibited and may be unlawful. If you are not the intended recipient, please contact the sender by return e-mail and destroy all copies of the original message.
Re: [sqlite] handling of BLOB bound parameters
On Oct 14, 2010, at 5:43 PM, Jens Miltner wrote:

> I just stumbled across a problem where sqlite would be stuck for quite
> a long time inside sqlite3VdbeExpandSql when using bound BLOB
> parameters [...] This looks like it would be *very* inefficient.
>
> Any ideas / answers?

It only does that if you have an sqlite3_trace() hook registered. If it's a problem, maybe you can clear the hook (by passing NULL to sqlite3_trace()) temporarily while executing queries that use large blobs. Or don't use it at all, if you can live without SQL tracing.

Dan.
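Dan's workaround can be sketched with Python's stdlib sqlite3 module, whose set_trace_callback() wraps the same tracing machinery; passing None clears the hook, as passing NULL to sqlite3_trace() does in C. The table name and blob size below are illustrative only:

```python
import sqlite3

con = sqlite3.connect(":memory:")
traced = []
con.set_trace_callback(traced.append)   # analogous to sqlite3_trace()

con.execute("CREATE TABLE foo(data BLOB)")

# Workaround sketch: drop the hook around blob-heavy statements, so the
# trace machinery never has to build a text expansion of a 700k parameter.
con.set_trace_callback(None)
con.execute("INSERT INTO foo VALUES (?)", (b"\x00" * 700_000,))
con.set_trace_callback(traced.append)   # restore tracing afterwards

print(traced)   # the CREATE TABLE was traced; the INSERT was not
```

Whether the traced text has bound parameters expanded depends on the SQLite and binding versions, but clearing the hook sidesteps the expansion either way.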
Re: [sqlite] handling of BLOB bound parameters
On Thursday, October 14, 2010, Jens Miltner <j...@mac.com> wrote:

> I just stumbled across a problem where sqlite would be stuck for quite
> a long time inside sqlite3VdbeExpandSql when using bound BLOB
> parameters [...] SQLite will actually produce a string representation
> of the BLOB.

What SQL statement did you use to create the table, and how do you bind the blob?

-afriza
Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS
On Oct 14, 2010, at 5:53 PM, Raj, Praveen wrote:

> I did some more investigation on the issue and I feel there is a
> synchronization problem happening here. [...] I don't see any msync()
> call in the SQLite amalgamation/wal.c file. I believe the data in the
> mapped region and physical file are not synched automatically; we need
> to do it explicitly using msync(). [...] Does the call to
> sqlite3OsSync() sync the shm file as well, or is the shm file not
> synced purposefully?

It's true that the mapped region and physical file are not synced automatically. But mmap() still should not be zeroing regions that have already been mapped. This is a bug in mmap().

We don't sync it because we don't care whether that file is written to persistent storage or not. The only reason we use a file located next to the database in the file-system, instead of one in /tmp or somewhere similar, is that it happens to be a convenient way to make sure all clients access the same shared memory. See the section entitled "Implementation Of Shared-Memory For The WAL-Index" here for more details: http://www.sqlite.org/wal.html

It seems plausible that adding the msync() might work around the mmap() problem. Hard to be really confident though - there may be race conditions lurking...

Thanks for looking into this.

Dan.
Re: [sqlite] handling of BLOB bound parameters
On 14.10.2010, at 12:56, Dan Kennedy wrote:

> It only does that if you have an sqlite3_trace() hook registered. If
> it's a problem, maybe you can clear the hook (by passing NULL to
> sqlite3_trace()) temporarily while executing queries that use large
> blobs. Or don't use it at all, if you can live without SQL tracing.

Ah, OK, that makes sense. That's also why I didn't notice this earlier: I only have tracing enabled for debugging purposes every now and then...

Thanks for the explanation.

-jens
Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS
I sent this before... have you tried it?

According to the QNX mmap page
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/m/mmap.html

  MAP_NOINIT
  When specified, the POSIX requirement that the memory be zeroed is
  relaxed. The physical memory being used for this allocation must have
  been previously freed with UNMAP_INIT_OPTIONAL for this flag to have
  any effect. This flag was added in the QNX Neutrino Core OS 6.3.2.

Interesting that this claims zeroing is a POSIX requirement; I don't think most other systems do this.

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems

________________________________
From: sqlite-users-boun...@sqlite.org on behalf of Raj, Praveen
Sent: Thu 10/14/2010 5:53 AM
To: General Discussion of SQLite Database
Subject: EXTERNAL:Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS

> Hi Dan,
>
> I did some more investigation on the issue and I feel there is a
> synchronization problem happening here. [...] Does the call to
> sqlite3OsSync() sync the shm file as well, or is the shm file not
> synced purposefully?
Re: [sqlite] handling of BLOB bound parameters
On Thu, Oct 14, 2010 at 6:43 AM, Jens Miltner <j...@mac.com> wrote:

> I just stumbled across a problem where sqlite would be stuck for quite
> a long time inside sqlite3VdbeExpandSql when using bound BLOB
> parameters [...]
>
> Any ideas / answers?

That only happens when you are using sqlite3_trace() to record the text of each SQL statement as it is evaluated. And that normally only happens during debugging.

--
D. Richard Hipp
d...@sqlite.org
Re: [sqlite] How can I build the FTS3 extension as a Windows DLL?
We want to use FTS3 with Python, but Python’s built-in sqlite3 module doesn’t have FTS. Since we can’t re-compile Python on the target systems, I was hoping to load FTS3 as a DLL using load_extension(). Our C++ apps can certainly link a static SQLite with FTS3. Maybe there’s another way to get FTS3 working with a stock Python installation on Windows?

N.

On Oct 13, 2010, at 9:58 PM, Max Vlasov wrote:

> On Thu, Oct 14, 2010 at 1:04 AM, Nate Silva <n...@desi.com> wrote:
>
>> (Although FTS can be compiled into SQLite, I would like a DLL so I
>> can dynamically load the extension into environments where I have a
>> pre-compiled SQLite that doesn’t have full text search.)
>
> Nate, I think compile-time errors are only part of the problem here.
> For such logic to work, the api/library has to be developed with this
> feature in mind: for example, resolving global variables, or being
> ready for an unexpected library unload while the main db is still
> open.
>
> If you really want to save space/memory, why don't you just prepare
> two versions of the DLL, with and without fts3, and choose which one
> to load at run-time? Or, if you want the variant without FTS to be
> statically linked, it's not that hard to move the function pointers
> into a struct and switch them between static and dynamic functions
> depending on the state of the program.
>
> Max
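One hedged way to approach this from Python is sketched below: probe whether the linked SQLite already has FTS3/FTS4 compiled in (modern Python builds usually do), and only fall back to load_extension() when it doesn't. Note that enable_load_extension()/load_extension() exist only in Python builds compiled with extension-loading support, and "fts3" here is a hypothetical name for a separately compiled DLL:

```python
import sqlite3

def fts3_available(con):
    """Return True if this sqlite3 build can create FTS3 tables."""
    try:
        # Creating and dropping a throwaway virtual table is a cheap probe.
        con.execute("CREATE VIRTUAL TABLE fts_probe USING fts3(body)")
        con.execute("DROP TABLE fts_probe")
        return True
    except sqlite3.OperationalError:
        return False

con = sqlite3.connect(":memory:")
if fts3_available(con):
    print("FTS3/FTS4 is compiled in; no DLL needed")
else:
    # Fall back to a separately built extension ("fts3" is a
    # hypothetical DLL name; it must match the file on disk).
    con.enable_load_extension(True)
    con.load_extension("fts3")
```

If the target Python was built without extension-loading support, enable_load_extension() will not exist, and shipping a whole replacement sqlite3 DLL (as Max suggests) is the remaining option.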
[sqlite] Just want to double check on index need
I am porting an application (American Football Results Picking Competition) over from a Postgres database to SQLite, which involves some fairly intense queries. I am doing this partly for performance comparisons, although I have some other reasons too. I just want to make sure that I am setting up the indexes on some of the tables correctly. Here is a representative table:

CREATE TABLE div_winner_pick (
  cid integer NOT NULL REFERENCES competition(cid)
      ON UPDATE CASCADE ON DELETE CASCADE,     -- Competition ID
  confid character(3) NOT NULL REFERENCES conference(confid)
      ON UPDATE CASCADE ON DELETE CASCADE,     -- Conference ID
  divid character(1) NOT NULL REFERENCES division(divid)
      ON UPDATE CASCADE ON DELETE CASCADE,     -- Division ID
  uid integer NOT NULL REFERENCES participant(uid)
      ON UPDATE CASCADE ON DELETE CASCADE,     -- User ID
  tid character(3) NOT NULL REFERENCES team(tid)
      ON UPDATE CASCADE ON DELETE CASCADE,     -- Team who will win division
  submit_time bigint DEFAULT (strftime('%s','now')) NOT NULL, -- Time of submission
  PRIMARY KEY (cid,confid,divid,uid)
);

The primary key spans several columns. For this particular table, in my Postgres schema I created the following two indexes:

CREATE INDEX div_win_pick_uid_idx ON div_winner_pick (uid);
CREATE INDEX div_win_pick_cid_idx ON div_winner_pick (cid);

i.e. two of the four fields that make up the primary key. I was anticipating doing the same, or something similar, and I am not yet convinced I don't also need:

CREATE INDEX div_win_pick_uid_cid_idx ON div_winner_pick (uid,cid);

However, I came across the following text on the SQLite web site, in the explanation of the CREATE TABLE command:

  "INTEGER PRIMARY KEY columns aside, both UNIQUE and PRIMARY KEY
  constraints are implemented by creating an index in the database (in
  the same way as a CREATE UNIQUE INDEX statement would). Such an index
  is used like any other index in the database to optimize queries. As
  a result, there is often no advantage (but significant overhead) in
  creating an index on a set of columns that are already collectively
  subject to a UNIQUE or PRIMARY KEY constraint."

I just wanted to check that this lack of advantage (and the overhead) applies purely to an index across all columns of the primary key, and that if I need an index across a smaller number of columns (because I am querying for all records that match given values of cid and uid in the example above), it is still an advantage to create it separately.

--
Alan Chandler
http://www.chandlerfamily.org.uk
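The question can be checked empirically with EXPLAIN QUERY PLAN. A sketch (Python stdlib sqlite3; the REFERENCES clauses are omitted for brevity): a query constraining only cid can use the left prefix of the implicit PRIMARY KEY index, while a query constraining only uid cannot, so a separate uid index still pays off.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE div_winner_pick (
      cid INTEGER NOT NULL, confid TEXT NOT NULL, divid TEXT NOT NULL,
      uid INTEGER NOT NULL, tid TEXT NOT NULL,
      PRIMARY KEY (cid, confid, divid, uid)
    )""")

# Constraining only cid: the left prefix of the implicit PK index
# (sqlite_autoindex_...) satisfies this, so no extra cid index is needed.
plan1 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM div_winner_pick WHERE cid = 1"
).fetchall()
print(plan1)

# Constraining only uid: uid is not a left prefix of the PK index,
# so a separate index is what makes this a SEARCH instead of a SCAN.
con.execute("CREATE INDEX div_win_pick_uid_idx ON div_winner_pick (uid)")
plan2 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM div_winner_pick WHERE uid = 42"
).fetchall()
print(plan2)
```

The detail column of each plan row names the index used, so the same probe answers whether the proposed (uid,cid) index would ever be chosen over the single-column ones for a given query.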
Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS
Hi Michael,

Thanks Dan and Michael for all your inputs.

I tried this approach as well, but didn't have any success. During unmapping I used the API munmap_flags() with the UNMAP_INIT_OPTIONAL flag to avoid the zero initialization during the next mmapping.

Another thought I have is that the old mmapped regions may not be initialized with zeros at all; instead the regions may be getting synced with the data in the disk file (which is full of zeros). Not sure whether that can happen with the mmap() API, though.

Thanks,
Praveen

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Black, Michael (IS)
Sent: Thursday, October 14, 2010 5:21 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS

> I sent this before... have you tried it?
>
> According to the QNX mmap page
> http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/m/mmap.html
>
>   MAP_NOINIT
>   When specified, the POSIX requirement that the memory be zeroed is
>   relaxed. The physical memory being used for this allocation must
>   have been previously freed with UNMAP_INIT_OPTIONAL for this flag
>   to have any effect. This flag was added in the QNX Neutrino Core
>   OS 6.3.2.
>
> Interesting that this claims zeroing is a POSIX requirement; I don't
> think most other systems do this. [...]
[sqlite] Nth row of an sqlite DB
Hello,

I’ve been adding and deleting rows from an sqlite database, so the primary ID is now non-sequential.

1. How do I get the nth entry in the database?
2. And how do I get the first n rows from the database?

Thanks,
Kavita
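A common answer to both questions is to order the rows explicitly and use LIMIT/OFFSET rather than relying on the ids being contiguous. A sketch with Python's stdlib sqlite3 (table and column names invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO t(name) VALUES (?)",
                [("a",), ("b",), ("c",), ("d",)])
con.execute("DELETE FROM t WHERE id = 2")  # ids are now 1, 3, 4

# First n rows (n=2) in id order:
first2 = con.execute("SELECT * FROM t ORDER BY id LIMIT 2").fetchall()
print(first2)   # -> [(1, 'a'), (3, 'c')]

# Nth row (n=2): LIMIT 1 OFFSET n-1
nth = con.execute("SELECT * FROM t ORDER BY id LIMIT 1 OFFSET 1").fetchone()
print(nth)      # -> (3, 'c')
```

Note that OFFSET still walks over the skipped rows internally, so for large n on a big table it is cheaper to seek by value (e.g. WHERE id > last_seen ORDER BY id LIMIT k) when paging.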
Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS
Did you also add the MAP_NOINIT to the mmap() call? It sounds like exactly the behavior you're seeing. Did you try writing a stand-alone app to test this idea? Michael D. Black Senior Scientist Advanced Analytics Directorate Northrop Grumman Information Systems From: sqlite-users-boun...@sqlite.org on behalf of Raj, Praveen Sent: Thu 10/14/2010 10:44 AM To: General Discussion of SQLite Database Subject: EXTERNAL:Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS Hi Micheal, Thanks Dan and Michael for all your inputs. I tried this approach as well, but didn't find any success. During unmapping i used the API munmap_flags() with UNMAP_INIT_OPTIONAL flag to avoid the zero initialization during the next mmaping. Another thought I have here is that the old mmapped regions may not be initialized with zeros, but instead the regions are getting synced with the data in disk file (which is full of zeros). Not sure if can happen with mmap() API though? Thanks, Praveen -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Black, Michael (IS) Sent: Thursday, October 14, 2010 5:21 PM To: General Discussion of SQLite Database Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS I sent this before...have you tried this? According to the QNX mmap page http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/m/mmap.html https://owa1.ngc.com/exchweb/bin/redir.asp?URL=http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/m/mmap.html MAP_NOINIT When specified, the POSIX requirement that the memory be zeroed is relaxed. The physical memory being used for this allocation must have been previously freed with UNMAP_INIT_OPTIONAL for this flag to have any effect. This flag was added in the QNX Neutrino Core OS 6.3.2. Interesting that this claims it's a POSIX requirement but I don't think most any others do this. Michael D. 
Black Senior Scientist Advanced Analytics Directorate Northrop Grumman Information Systems From: sqlite-users-boun...@sqlite.org on behalf of Raj, Praveen Sent: Thu 10/14/2010 5:53 AM To: General Discussion of SQLite Database Subject: EXTERNAL:Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS Hi Dan, I did some more investigation on the issue and I feel there is a synchronization problem happening here. After mmapping the shm (wal index) file into process memory, the WAL indexes are written into the mmapped area, and this data is not getting synchronized with the physical (shm) file. As a result, when the mmap() function is called the second time to map the 32k-64k memory region, it is synchronizing the complete mmapped region (the previous 32k regions) with the physical file, even though a valid offset is passed. Not sure if this is the actual behaviour of the mmap() call. While debugging, before the mmap() call I checked the mmapped region and it had valid indexes, whereas after the call they all became 0's. Also I found that the shm file is always filled with 0's even after commits. When I added an msync() statement (to sync the shm file) before the mmap call as shown below, the problem is not seen. In this case the shm file has valid 32-bit indexes, as data is synchronized before the next mmap call is executed. while( pShmNode->nRegion<=iRegion ){ int ret = msync(apNew[0], iRegion*szRegion, MS_SYNC); void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE, MAP_SHARED, pShmNode->h, pShmNode->nRegion*szRegion ); With the above msync() call all my failing test cases pass. I don't see any msync() call in the SQLite amalgamation/wal.c file. I believe the data in the mapped region and the physical file are not synched automatically; we need to do it explicitly with an msync() call. I don't know if there is any other mechanism in SQLite through which the data is synchronized. Does the call to sqlite3OsSync() sync the shm file as well, or is the shm file left unsynced on purpose? 
This is all my understanding, and I'm not sure if this is causing the actual issue. Please guide me if my approach/understanding is incorrect. Thanks, Praveen -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy Sent: Friday, October 08, 2010 9:33 PM To: General Discussion of SQLite Database Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS On Oct 8, 2010, at 9:44 PM, Raj, Praveen wrote: Hello, I debugged the SQLite functions and here is my finding: the call to mmap in the function unixShmMap is causing the issue. void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE, MAP_SHARED, pShmNode->h, iRegion*szRegion); It is setting the previous memory region/regions to zero while mapping the new ones. The mmap call internally uses the QNX API mmap64() to map the
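The pattern under discussion (write through a MAP_SHARED mapping, flush with msync(), then remap and expect the data to survive) can be sketched in Python, whose mmap.flush() wraps msync() on POSIX systems. This is only an illustration of the write-flush-remap sequence, not SQLite's actual shm code; the file size mirrors one 32 KiB shm region.

```python
import mmap
import os
import tempfile

# Create a 32 KiB file, the size of one SQLite shm region.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 32768)

# First mapping: write an "index" value into the shared region.
m1 = mmap.mmap(fd, 32768, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)
m1[0:4] = (1234).to_bytes(4, "little")
m1.flush()   # msync(MS_SYNC): push the dirty pages to the backing file
m1.close()

# Second mapping of the same region: the value must survive the remap.
m2 = mmap.mmap(fd, 32768, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)
value = int.from_bytes(m2[0:4], "little")
m2.close()
os.close(fd)
os.unlink(path)
print(value)  # all-zero reads at this point would match the QNX symptom
```

On Linux the second mapping sees 1234 even without the flush, because both mappings share the same page cache; the thread's observation is that on QNX the explicit msync() appeared to be required.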
Re: [sqlite] Nth row of on sqlite DB
3. How can I make my primary ID remain sequential even after a delete of row. Can sqlite somehow realign the indices after a row in the middle is deleted ? On 10/14/10 10:53 AM, Kavita Raghunathan kavita.raghunat...@skyfiber.com wrote: Hello, I’ve been adding and deleting rows from the sqlite database. Now the primary ID is non-sequential. 1. How do I get the nth entry in the database 2. AND, How do I get the first n rows from the database ? Thanks, Kavita ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Nth row of on sqlite DB
On Thu, Oct 14, 2010 at 10:54 AM, Kavita Raghunathan kavita.raghunat...@skyfiber.com wrote: 3. How can I make my primary ID remain sequential even after a delete of row. Can sqlite somehow realign the indices after a row in the middle is deleted ? If you can change the primary ID (sic), by which I assume you mean the primary key, then it won't really be a primary key. A PK should be immutable. Create a separate ID that is under your control, and change it as you wish. On 10/14/10 10:53 AM, Kavita Raghunathan kavita.raghunat...@skyfiber.com wrote: Hello, I've been adding and deleting rows from the sqlite database. Now the primary ID is non-sequential. 1. How do I get the nth entry in the database 2. AND, How do I get the first n rows from the database ? Both nth and first n depend upon the sort order. Sort the table the way you want, and then get the nth or the first n (use LIMIT/OFFSET) to get the desired rows. The db by itself has no sort order, although, if you have a numeric PK, there is an implicit ascending sort on the PK. Thanks, Kavita -- Puneet Kishor http://www.punkish.org Carbon Model http://carbonmodel.org Charter Member, Open Source Geospatial Foundation http://www.osgeo.org Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor Nelson Institute, UW-Madison http://www.nelson.wisc.edu --- Assertions are politics; backing up assertions with evidence is science === ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
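The ORDER BY plus LIMIT/OFFSET approach above can be sketched with Python's sqlite3 module. The table and column names are invented for illustration, with gaps deleted from the key sequence to match the situation described in the thread.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
con.executemany("INSERT INTO events(id, msg) VALUES (?, ?)",
                [(i, f"event {i}") for i in range(1, 101)])
# Simulate deletions that leave gaps in the key sequence.
con.execute("DELETE FROM events WHERE id IN (2, 5, 11)")

# First n rows in key order: ORDER BY plus LIMIT.
first_10 = [r[0] for r in con.execute(
    "SELECT id FROM events ORDER BY id LIMIT 10")]
print(first_10)  # [1, 3, 4, 6, 7, 8, 9, 10, 12, 13]

# Nth row (1-based): LIMIT 1 OFFSET n-1.
n = 5
nth = con.execute("SELECT id FROM events ORDER BY id LIMIT 1 OFFSET ?",
                  (n - 1,)).fetchone()[0]
print(nth)  # 7: the 5th surviving row
```

Note the explicit ORDER BY: without it, OFFSET counts rows in whatever order the engine happens to produce, which is exactly the trap discussed later in the thread.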
Re: [sqlite] Nth row of on sqlite DB
Please see comment below. On 10/14/10 11:02 AM, P Kishor punk.k...@gmail.com wrote: Hello, I've been adding and deleting rows from the sqlite database. Now the primary ID is non-sequential. 1. How do I get the nth entry in the database 2. AND, How do I get the first n rows from the database ? Both nth and first n depend upon the sort order. Sort the table the way you want, and then get the nth or the first n (use LIMIT/OFFSET) to get the desired rows. The db by itself has no sort order, although, if you have a numeric PK, there is an implicit ascending sort on the PK. Yes, I have a numeric PK. I don't want to sort the table; it was added in ascending order of PK and I want it the same way, because the entries are historical events based on order of occurrence. When I added them, the PK was 1,2,3,...100. I have since, say, deleted 2, 5, and 11 because those events are no longer relevant. Now I want the first 10 entries, which will be 1,3,4,6,7,8,9,10,12,13. You are saying I can use LIMIT/OFFSET to do that? I do not need to order. I'm looking for the simplest and most efficient way to do this. I know, aren't we all :) Kavita Thanks, Kavita ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Nth row of on sqlite DB
I found an example: Select * from tblxyz order by ID limit 1 offset n-1 will retrieve the nth row. In this case, order by ID is probably not necessary as it's already ordered, with a few missing rows. Thank you! Kavita On 10/14/10 11:15 AM, Kavita Raghunathan kavita.raghunat...@skyfiber.com wrote: Please see comment On 10/14/10 11:02 AM, P Kishor punk.k...@gmail.com wrote: Hello, I've been adding and deleting rows from the sqlite database. Now the primary ID is non-sequential. 1. How do I get the nth entry in the database 2. AND, How do I get the first n rows from the database ? Both nth and first n depend upon the sort order. Sort the table the way you want, and then get the nth or the first n (use LIMIT/OFFSET) to get the desired rows. The db by itself has no sort order, although, if you have a numeric PK, there is an implicit ascending sort on the PK. Yes, I have a numeric PK. I don't want to sort the table; it was added in ascending order of PK and I want it the same way, because the entries are historical events based on order of occurrence. When I added them, the PK was 1,2,3,...100. I have since, say, deleted 2, 5, and 11 because those events are no longer relevant. Now I want the first 10 entries, which will be 1,3,4,6,7,8,9,10,12,13. You are saying I can use LIMIT/OFFSET to do that? I do not need to order. I'm looking for the simplest and most efficient way to do this. I know, aren't we all :) Kavita Thanks, Kavita ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Just want to double check on index need
On Oct 14, 2010, at 10:43 PM, Alan Chandler wrote: I am porting an application (American Football Results Picking Competition) over from a Postgres database to SQLite which involves some fairly intense queries. I am doing this partially to do some performance comparisons, although I have some other reasons too. I just want to make sure that I am setting up the indexes to some of the tables correctly. Here is an example of a representative type of table CREATE TABLE div_winner_pick ( cid integer NOT NULL REFERENCES competition(cid) ON UPDATE CASCADE ON DELETE CASCADE, -- Competition ID confid character(3) NOT NULL REFERENCES conference(confid) ON UPDATE CASCADE ON DELETE CASCADE, --Conference ID divid character(1) NOT NULL REFERENCES division(divid) ON UPDATE CASCADE ON DELETE CASCADE, --Division ID uid integer NOT NULL REFERENCES participant(uid) ON UPDATE CASCADE ON DELETE CASCADE, --User ID tid character(3) NOT NULL REFERENCES team(tid) ON UPDATE CASCADE ON DELETE CASCADE, --Team who will win division submit_time bigint DEFAULT (strftime('%s','now')) NOT NULL, --Time of submission PRIMARY KEY (cid,confid,divid,uid) ); where the primary key spans several columns. For this particular table - in my Postgres definition - I created the following two indexes CREATE INDEX div_win_pick_uid_idx ON div_winner_pick (uid); CREATE INDEX div_win_pick_cid_idx ON div_winner_pick (cid); i.e. two of the 4 fields that make up the primary key, and I was anticipating doing the same - or something similar - I am not yet convinced I don't need to do CREATE INDEX div_win_pick_uid_cid_idx ON div_winner_pick (uid,cid); However, I came across the following text on the SQLite Web Site as part of the explanation of the CREATE TABLE command INTEGER PRIMARY KEY columns aside, both UNIQUE and PRIMARY KEY constraints are implemented by creating an index in the database (in the same way as a CREATE UNIQUE INDEX statement would). 
Such an index is used like any other index in the database to optimize queries. As a result, there is often no advantage (but significant overhead) in creating an index on a set of columns that are already collectively subject to a UNIQUE or PRIMARY KEY constraint. I just wanted to check that lack of advantage (and overhead) applies purely to an index across all columns of the primary key, and that if I need an index across a lesser number of columns (because I am querying for all records that match where I can define the values cid and uid in the example above) it is still an advantage to create it separately. The documentation should be clearer. Basically the index would be redundant if it contains the same columns in the same order as the primary key. Or a prefix thereof. i.e. the following indexes would all be redundant (pure overhead for no benefit): CREATE INDEX x ON div_winner_pick(cid); CREATE INDEX x ON div_winner_pick(cid,confid); CREATE INDEX x ON div_winner_pick(cid,confid,divid); CREATE INDEX x ON div_winner_pick(cid,confid,divid,uid); Your index is not redundant though. Dan. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
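The automatic index behind a composite PRIMARY KEY is easy to observe, which makes the redundancy rule concrete. A minimal sketch using Python's sqlite3 (the table definition is abbreviated from the post):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A non-INTEGER composite PRIMARY KEY makes SQLite build an automatic index.
con.execute("""CREATE TABLE div_winner_pick (
    cid INTEGER NOT NULL, confid TEXT NOT NULL,
    divid TEXT NOT NULL, uid INTEGER NOT NULL,
    PRIMARY KEY (cid, confid, divid, uid))""")
# (uid, cid) is not a prefix of (cid, confid, divid, uid),
# so this extra index is not redundant.
con.execute("CREATE INDEX div_win_pick_uid_cid_idx"
            " ON div_winner_pick (uid, cid)")

# PRAGMA index_list shows both the explicit index and the PK's autoindex.
indexes = [row[1] for row in con.execute("PRAGMA index_list(div_winner_pick)")]
print(sorted(indexes))
```

The listing contains `sqlite_autoindex_div_winner_pick_1` alongside the explicit index; an index repeating the PK columns in PK order would merely duplicate that autoindex.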
[sqlite] How to optimize a multi-condition query
Hi all, I have a query that does not perform as fast as expected and would like to know if anyone here has an idea on how to optimize it. There are two tables, A and B. Both have fields chr, start, stop and strand. A has about 50k entries, B about 12k. Both contain intervals defined by start...stop. The intervals in B have length 1, those in A any possible length. I want to identify all intervals in B that fall into one of those in A and get the corresponding interval data from A for each match of B. Indexes have been created for all fields in A and B (e.g. create index name on A(chr, start, stop, strand)); The query is as follows: select * from a, b where a.chr = b.chr and a.strand = b.strand and a.start <= b.start and a.stop >= b.stop and b.start <= a.stop and a.start <= b.stop ; This query takes about 130 seconds, but when I remove the last two conditions it drops to 0.5 seconds. Replacing both last conditions with conditions like b.start < 1000 or b.start > 0, the execution time stays at 0.5 seconds. Any help appreciated. Thanks ! Hilmar ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Nth row of on sqlite DB
On Thu, Oct 14, 2010 at 11:15 AM, Kavita Raghunathan kavita.raghunat...@skyfiber.com wrote: Please see comment On 10/14/10 11:02 AM, P Kishor punk.k...@gmail.com wrote: Hello, I've been adding and deleting rows from the sqlite database. Now the primary ID is non-sequential. 1. How do I get the nth entry in the database 2. AND, How do I get the first n rows from the database ? Both nth and first n depend upon the sort order. Sort the table the way you want, and then get the nth or the first n (use LIMIT/OFFSET) to get the desired rows. The db by itself has no sort order, although, if you have a numeric PK, there is an implicit ascending sort on the PK. Yes, I have a numeric PK. I don't want to sort the table; it was added in ascending order of PK and I want it the same way, because the entries are historical events based on order of occurrence. The database has no concept of order of occurrence, you do. So, create a column in which you can store the timestamp for when the row was created. Then, use that to order the result and use LIMIT/OFFSET to restrict the rows you want returned. When I added them, the PK was 1,2,3,...100. I have since, say, deleted 2, 5, and 11 because those events are no longer relevant. Now I want the first 10 entries, which will be 1,3,4,6,7,8,9,10,12,13. You are saying I can use LIMIT/OFFSET to do that? I do not need to order. I'm looking for the simplest and most efficient way to do this. I know, aren't we all :) There is nothing complicated or inefficient about enforcing the sort order that you want. You do need an order, because without an order there is no meaning to first or first n. The PK is generally controlled by the db (although it doesn't have to be). A PK definitely should be immutable and non-reusable, unless you are enforcing FKs with cascading DELETEs, because it might be a foreign key in another table. 
Kavita Thanks, Kavita ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users -- Puneet Kishor http://www.punkish.org Carbon Model http://carbonmodel.org Charter Member, Open Source Geospatial Foundation http://www.osgeo.org Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor Nelson Institute, UW-Madison http://www.nelson.wisc.edu --- Assertions are politics; backing up assertions with evidence is science === ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Just want to double check on index need
On 14/10/10 17:28, Dan Kennedy wrote: On Oct 14, 2010, at 10:43 PM, Alan Chandler wrote: CREATE TABLE div_winner_pick ( ... PRIMARY KEY (cid,confid,divid,uid) ); ... CREATE INDEX div_win_pick_uid_cid_idx ON div_winner_pick (uid,cid); ... It should be clearer. Basically the index would be redundant if it contains the same columns in the same order as the primary key. Or a prefix thereof. i.e. the following indexes would be all be redundant (pure overhead for no benefit): CREATE INDEX x ON div_pick_winner(cid); CREATE INDEX x ON div_pick_winner(cid,confid); CREATE INDEX x ON div_pick_winner(cid,confid,divid); CREATE INDEX x ON div_pick_winner(cid,confid,divid,uid); Your index is not redundant though. This is interesting - what if I changed the primary key to be PRIMARY KEY (uid,cid,confid,divid) Is that an optimisation that is useful to make? -- Alan Chandler http://www.chandlerfamily.org.uk ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Just want to double check on index need
On Oct 14, 2010, at 11:56 PM, Alan Chandler wrote: On 14/10/10 17:28, Dan Kennedy wrote: On Oct 14, 2010, at 10:43 PM, Alan Chandler wrote: CREATE TABLE div_winner_pick ( ... PRIMARY KEY (cid,confid,divid,uid) ); ... CREATE INDEX div_win_pick_uid_cid_idx ON div_winner_pick (uid,cid); ... It should be clearer. Basically the index would be redundant if it contains the same columns in the same order as the primary key. Or a prefix thereof. i.e. the following indexes would be all be redundant (pure overhead for no benefit): CREATE INDEX x ON div_pick_winner(cid); CREATE INDEX x ON div_pick_winner(cid,confid); CREATE INDEX x ON div_pick_winner(cid,confid,divid); CREATE INDEX x ON div_pick_winner(cid,confid,divid,uid); Your index is not redundant though. This is interesting - what if I changed the primary key to be PRIMARY KEY (uid,cid,confid,divid) Is that an optimisation that is useful to make? Quite possibly. If you change the primary key to the above then you can leave out the (uid, cid) index. All queries that would have used the (uid, cid) index will use the automatic index created by the PRIMARY KEY instead. One less index means a smaller database file and less work for SQLite to do when you insert/update/delete a row. Dan. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Is the SYNTAX diagram wrong
I seem to be doing plenty of SELECT * FROM a LEFT JOIN b WHERE ...; but looking at the syntax diagrams at http://www.sqlite.org/lang_select.html it looks like I have to follow LEFT with OUTER. Shouldn't the diagram allow OUTER to be bypassed? -- Alan Chandler http://www.chandlerfamily.org.uk ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Nth row of on sqlite DB
On 14/10/10 17:26, Kavita Raghunathan wrote: I found an example: Select * from tblxyz order by ID limit 1 offset n-1 will retrieve the nth row. In this case order by ID is probably not necessary as it's already ordered with a few missing rows. It might happen to be ordered - but that is an implementation detail. Unless you use the ORDER BY clause, the database may deliver the records in any order it wishes. It is not forced to use the ID order. -- Alan Chandler http://www.chandlerfamily.org.uk ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] How to optimize a multi-condition query
On 14 Oct 2010, at 5:46pm, Hilmar Berger wrote: The query is as follows: select * from a, b where a.chr = b.chr and a.strand = b.strand and a.start <= b.start and a.stop >= b.stop and b.start <= a.stop and a.start <= b.stop ; Indexes have been created for all fields in A and B (e.g. create index name on A(chr, start, stop, strand)); I'm not certain from your phrasing, but this may not do what you think. It is a different matter to do create index Achr on A (chr) create index Astart on A (start) create index Astop on A (stop) ... to what you did above. One creates a single index on the sequence of four columns; the other creates four indexes, each on one column. However, to make this SELECT go fast, CREATE INDEX Bkey ON B (chr,strand) CREATE INDEX Bstart ON B (start) CREATE INDEX Bstop ON B (stop) may help. I also recommend running ANALYZE: http://www.sqlite.org/lang_analyze.html Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
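The containment join being discussed can be sketched end-to-end with tiny sample data. This checks the logic (which B intervals fall inside an A interval) rather than the speed; the table contents and index names are invented for illustration, with the index layout following Simon's suggestion.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (chr TEXT, start INT, stop INT, strand TEXT);
CREATE TABLE b (chr TEXT, start INT, stop INT, strand TEXT);
-- Index the equality-join columns together, the range columns separately.
CREATE INDEX bkey   ON b (chr, strand);
CREATE INDEX bstart ON b (start);
CREATE INDEX bstop  ON b (stop);
""")
con.executemany("INSERT INTO a VALUES (?,?,?,?)",
                [("chr1", 100, 200, "+"), ("chr1", 500, 600, "+")])
con.executemany("INSERT INTO b VALUES (?,?,?,?)",
                [("chr1", 150, 151, "+"),    # inside 100-200
                 ("chr1", 300, 301, "+"),    # inside nothing
                 ("chr1", 550, 551, "+")])   # inside 500-600

# B intervals fully contained in an A interval, on matching chr and strand.
rows = sorted(con.execute("""
    SELECT a.start, a.stop, b.start
    FROM a JOIN b ON a.chr = b.chr AND a.strand = b.strand
    WHERE a.start <= b.start AND b.stop <= a.stop
""").fetchall())
print(rows)  # [(100, 200, 150), (500, 600, 550)]
```

Running ANALYZE after loading real data, as Simon suggests, lets the planner pick between the equality index and the range indexes based on actual selectivity.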
Re: [sqlite] Nth row of on sqlite DB
I see. I was not aware of this because the display results of select * from TBLXYZ have always shown up in ascending order of ID. I don't recall a single instance where they were delivered out of order, hence my assumption. Now I understand Kishor's comment better about the DB not knowing about occurrence and having to use order by. Will do. Kavita On 10/14/10 12:10 PM, Alan Chandler a...@chandlerfamily.org.uk wrote: On 14/10/10 17:26, Kavita Raghunathan wrote: I found an example: Select * from tblxyz order by ID limit 1 offset n-1 will retrieve the nth row. In this case order by ID is probably not necessary as it's already ordered with a few missing rows. It might happen to be ordered - but that is an implementation detail. Unless you use the ORDER BY clause, the database may deliver the records in any order it wishes. It is not forced to use the ID order. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is the SYNTAX diagram wrong
On Thu, Oct 14, 2010 at 1:05 PM, Alan Chandler a...@chandlerfamily.org.ukwrote: I seem to be doing plenty of SELECT * FROM a LEFT JOIN b WHERE ...; but looking at the syntax diagrams at http://www.sqlite.org/lang_select.html it looks like I have to follow LEFT with OUTER. Shouldn't the diagram allow OUTER to be bypassed? Up until recently, the syntax diagram was this: http://www.sqlite.org/docsrc/artifact/21606e4a148231cb739bffb698dd746b49b79a99 But that diagram allowed things like OUTER JOIN or NATURAL OUTER JOIN, which do not work in SQLite. So the diagram was changed to http://www.sqlite.org/docsrc/artifact/8d36306fc0388a51444e35f98369d29b35a005a2 But, as you observe, this new diagram precludes LEFT JOIN. So it has now been revised again to: http://www.sqlite.org/docsrc/artifact/21606e4a148231cb739bffb698dd746b49b79a99 Your code is correct. -- Alan Chandler http://www.chandlerfamily.org.uk ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users -- D. Richard Hipp d...@sqlite.org ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] What happens when PK reaches max integer
Hello, My primary key is an integer. In the lifetime of a product, it may increase to the maximum possible value of an integer. Our processor will make it overflow into a very large negative number, and so on. My specific question is: if overflow occurs, how does sqlite deal with it? 1. What if it's a large negative number ? 2. What if there is already a PK with the rolled-over value, say 1? This may be a common problem you have all run into. Hope you can help answer. Thanks, Kavita ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] What happens when PK reaches max integer
The sqlite int space is 64 bits. You'd have to generate 1 billion keys per second for ~600 years to exhaust the space. Is my math right? Eric Hello, My Primary Key is an integer. In the lifetime of a product, it may increase to the maximum possible value of an integer. Our processor will make it overflow into a very large negative number and so on. My specific question is, if overflow occurs, how does sqlite deal with it? 1. What if its a large negative number ? 2. What if there is already a PK with the rolled over value, say 1? This maybe a common problem you all may have ran into. Hope you can help answer. Thanks, Kavita ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users -- Eric A. Smith Aeropalmics (ayr o palm' iks), n.: The study of wind resistance conducted by holding a cupped hand out the car window. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
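Eric's arithmetic checks out for the full unsigned 64-bit space; SQLite rowids are actually signed 64-bit integers, which halves the figure to roughly three centuries at a billion inserts per second, still far beyond any product lifetime. A quick back-of-envelope check:

```python
# Years to exhaust the rowid space at one billion inserts per second.
rate = 1_000_000_000        # keys per second
year = 365 * 24 * 3600      # seconds per year

signed = 2**63 / rate / year   # SQLite rowids are signed 64-bit
full   = 2**64 / rate / year   # the full 64-bit space Eric describes
print(round(signed), round(full))  # 292 585
```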
Re: [sqlite] What happens when PK reaches max integer
On Thu, Oct 14, 2010 at 12:33:11PM -0500, Kavita Raghunathan scratched on the wall: Hello, My Primary Key is an integer. In the lifetime of a product, it may increase to the maximum possible value of an integer. Our processor will make it overflow into a very large negative number and so on. My specific question is, if overflow occurs, how does sqlite deal with it? 1. What if its a large negative number ? 2. What if there is already a PK with the rolled over value, say 1? http://www.sqlite.org/autoinc.html -- If no ROWID is specified on the insert, or if the specified ROWID has a value of NULL, then an appropriate ROWID is created automatically. The usual algorithm is to give the newly created row a ROWID that is one larger than the largest ROWID in the table prior to the insert. If the table is initially empty, then a ROWID of 1 is used. If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used. If no unused ROWID can be found after a reasonable number of attempts, the insert operation fails with an SQLITE_FULL error. If no negative ROWID values are inserted explicitly, then automatically generated ROWID values will always be greater than zero. [...] If a column has the type INTEGER PRIMARY KEY AUTOINCREMENT then a slightly different ROWID selection algorithm is used. The ROWID chosen for the new row is at least one larger than the largest ROWID that has ever before existed in that same table. If the table has never before contained any data, then a ROWID of 1 is used. If the table has previously held a row with the largest possible ROWID, then new INSERTs are not allowed and any attempt to insert a new row will fail with an SQLITE_FULL error. This maybe a common problem you all may have ran into. No, not normally. 64-bits is a *really* big domain. -j -- Jay A. 
Kreibich J A Y @ K R E I B I.C H Intelligence is like underwear: it is important that you have it, but showing it to the wrong people has the tendency to make them feel uncomfortable. -- Angela Johnson ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
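The AUTOINCREMENT edge case Jay quotes is easy to demonstrate: once a table has held the largest possible rowid, automatic inserts fail with SQLITE_FULL. A sketch using Python's sqlite3 (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
# Explicitly plant the largest possible rowid, 2**63 - 1.
con.execute("INSERT INTO t(id, v) VALUES (9223372036854775807, 'max')")

try:
    con.execute("INSERT INTO t(v) VALUES ('next')")  # automatic rowid
    outcome = "inserted"
except sqlite3.OperationalError as e:
    outcome = str(e)   # SQLITE_FULL surfaces as "database or disk is full"
print(outcome)
```

Without the AUTOINCREMENT keyword, the same second insert would instead succeed with a randomly chosen unused positive rowid, per the first algorithm quoted above.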
Re: [sqlite] What happens when PK reaches max integer
Thank you Eric and Jay. Your replies address my concerns. I mean, I'm no longer concerned :) Thanks, Kavita On 10/14/10 1:20 PM, Jay A. Kreibich j...@kreibi.ch wrote: On Thu, Oct 14, 2010 at 12:33:11PM -0500, Kavita Raghunathan scratched on the wall: Hello, My Primary Key is an integer. In the lifetime of a product, it may increase to the maximum possible value of an integer. Our processor will make it overflow into a very large negative number and so on. My specific question is, if overflow occurs, how does sqlite deal with it? 1. What if its a large negative number ? 2. What if there is already a PK with the rolled over value, say 1? http://www.sqlite.org/autoinc.html -- If no ROWID is specified on the insert, or if the specified ROWID has a value of NULL, then an appropriate ROWID is created automatically. The usual algorithm is to give the newly created row a ROWID that is one larger than the largest ROWID in the table prior to the insert. If the table is initially empty, then a ROWID of 1 is used. If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used. If no unused ROWID can be found after a reasonable number of attempts, the insert operation fails with an SQLITE_FULL error. If no negative ROWID values are inserted explicitly, then automatically generated ROWID values will always be greater than zero. [...] If a column has the type INTEGER PRIMARY KEY AUTOINCREMENT then a slightly different ROWID selection algorithm is used. The ROWID chosen for the new row is at least one larger than the largest ROWID that has ever before existed in that same table. If the table has never before contained any data, then a ROWID of 1 is used. If the table has previously held a row with the largest possible ROWID, then new INSERTs are not allowed and any attempt to insert a new row will fail with an SQLITE_FULL error. 
This maybe a common problem you all may have ran into. No, not normally. 64-bits is a *really* big domain. -j ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] gui for data entry
On Wed, 13 Oct 2010 18:15:54 +0300 Mihai Militaru mihai.milit...@gmx.com wrote: That is what I am using, but unless I am missing something, it doesn't let me create a form for data entry. Please pardon my distraction, Graham, I didn't read your post carefully. My recommendation was general-purpose. But I got this idea: what if you create such forms yourself, using the flexibility given by SQL? I don't know whether it would satisfy all your requirements, but at least for duplication of data you can easily use temporary tool tables with triggers, e.g. you create a table using the required fields (both named purposefully), and then triggers attached to it can update different things on different target tables. You export (or write manually) this easy setup to an SQL file and import it every time you work, editing its content any time you need more features. The manager appears to support user-defined functions in a language it doesn't specify and I don't recognize (but I assume it's JavaScript), so the possibilities seem to be unlimited. -- Mihai Militaru mihai.milit...@gmx.com ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users