Re: [sqlite] Bigger table and query optimization
"Stephen Oberholtzer" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Idea: Submit a patch that allows LIKE expressions that start with a > fixed > string (i.e. don't start with '%') to use the index to improve > performance. (SQL Server 2000 does this.) http://www.sqlite.org/optoverview.html 4.0 The LIKE optimization Igor Tandetnik ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Bigger table and query optimization
On Tue, Sep 9, 2008 at 9:04 AM, Bruno Moreira Guedes <[EMAIL PROTECTED]> wrote:
> 2008/9/9 P Kishor <[EMAIL PROTECTED]>:
> > On 9/9/08, Bruno Moreira Guedes <[EMAIL PROTECTED]> wrote:
> >> Hi people!!
> >>
> >> I'm getting problems with a bigger table. I'm doing a query like this:
> >>
> >> SELECT field FROM tables WHERE '%.' || somefield LIKE 'somestring'
> >
> > interesting variation on the syntax. Usually one would
> >
> > WHERE somefield LIKE '%.somestring'
>
> Yes... I need to get the rows where 'something' ENDS with
> '.[field-value]'. But I really don't think this solution works very well.

Store the field *twice* -- once normally, and once *backwards*. Put an
index on the backwards column, and when searching, do:

  somefield_backwards >= 'gnitset' and somefield_backwards < 'gnitseu'

(Note that I advanced the last character from 't' to 'u' and used <
instead of <=.) This will enable SQLite to use the index on the
backwards column to efficiently find everything.

Idea: Submit a patch that allows LIKE expressions that start with a
fixed string (i.e. don't start with '%') to use the index to improve
performance. (SQL Server 2000 does this.)

Idea 2: Submit a patch adding a built-in 'reverse' function to assist
in this.

--
Stevie-O
Real programmers use COPY CON PROGRAM.EXE
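For illustration, a minimal sketch of how the exclusive upper bound for
such a range scan could be computed in C, assuming plain ASCII text (the
column and binding names here are just examples):

  #include <string.h>

  /* Turn a reversed search string into the exclusive upper bound,
  ** e.g. "gnitset" -> "gnitseu", by advancing its last character. */
  static void make_upper_bound(char *z)
  {
      size_t n = strlen(z);
      if (n > 0) z[n-1]++;
  }

  /* The query would then be prepared as:
  **   SELECT field FROM sometable
  **    WHERE somefield_backwards >= ?1 AND somefield_backwards < ?2;
  ** with ?1 bound to the reversed search string and ?2 to a copy of it
  ** that has been passed through make_upper_bound(). */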
[sqlite] Crashed on sqlite3_exec(pDb, "PRAGMA synchronous=OFF ", NULL, 0, &errMsg);
Hi all,

I have a function that opens the database and sets some database
properties, including:

  sqlite3_exec(pDb, "PRAGMA synchronous=OFF", NULL, 0, &errMsg);

This call crashes on some databases but not all, with the following
message:

  Program received signal SIGSEGV, Segmentation fault.
  0xb6f67ca5 in enterMem () at ../src/mem1.c:66
  66        sqlite3_mutex_enter(mem.mutex);
  Current language: auto; currently c

I have no clue what is going on here. Your help is greatly appreciated.

Thanks,
JP
[sqlite] Reinstalling original sqlite 3.1.3 on OS X
Hi -- I wonder if anybody can help? I recently tried to update my copy of
sqlite on my machine (running OS X 10.4) and stupidly overwrote my
system's copy (i.e. I didn't use the /usr/local prefix). My OS X 10.4
install is now fairly broken, and as far as I can ascertain the default
install of sqlite that comes with OS X 10.4 is a patched version of
3.1.3. I've tried installing the vanilla 3.1.3 but that doesn't seem to
fix my problems.

Does anybody have any suggestions on what I could do? Is there a patch
for 3.1.3 for the OS X 10.4 version available? Any help/ideas greatly
appreciated.

Many Thanks,
~ Eliza Gu
Re: [sqlite] char to int conversion
jerry wrote:
> I have a CHAR field which is usually an integer. I would like to sort
> this field as if it is an integer so that 1a 5b 10c 12xxx does not get
> sorted as 10c 12xxx 1a 5b. I have successfully used something like
> "ORDER BY CHARFIELDNAME - 0" which seems to convert the expression to
> an integer the same way that atoi would. This is exactly what I want. I
> would like to know if this is an accident or it is behavior that I can
> count on for future versions of sqlite. Thank you.

I would suggest that you use a cast expression to force the data to be an
integer. For strings such as your examples, the cast will only use the
initial characters that form a valid integer.

  order by cast(somefield as integer)

The behavior you are seeing is well defined, but I think a little less
clear. The subtraction operator requires numeric arguments, so sqlite
will coerce the string field into a numeric value before it is passed to
the subtraction operator. This coercion works in the same way as the cast
operator, i.e. it ignores any non-numeric suffix.

HTH
Dennis Cote
[sqlite] help with memsys5 and memory status variables
Hi all,

I'm trying to use the memsys5 subsystem (3.6.2). I've run the configs for
MEMSTATUS, HEAP, SCRATCHBUF. During processing I get an out-of-memory
error returned from a select statement. Here is the output of the various
sqlite3 memory highwater marks:

  SQLITE_STATUS_MEMORY_USED        [65469440]
  SQLITE_STATUS_PAGECACHE_USED     [15985]
  SQLITE_STATUS_PAGECACHE_OVERFLOW [63883264]
  SQLITE_STATUS_SCRATCH_USED       [2]
  SQLITE_STATUS_SCRATCH_OVERFLOW   [0]
  SQLITE_STATUS_MALLOC_SIZE        [524288]
  SQLITE_STATUS_PARSER_STACK       [0]
  SQLITE_STATUS_PAGECACHE_SIZE     [4360]
  SQLITE_STATUS_SCRATCH_SIZE       [30696]

Any suggestions as to where I'm running out of memory? Also, I'm not
certain why the pagecache is overflowing.

thanks,
Ken
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
Lothar Behrens wrote:
>
> But when you say that, if journal files are open, transactions are
> open, I could set a breakpoint at the line of code where the
> transaction opens the journal file and look around there to see where
> the transaction comes from.
>
> Is that an option?
>
> What function in the sqlite library does this?
>

Yes, that is an option if you are using a source code version of SQLite,
either the individual source files or the amalgamation file, sqlite3.c.

The journal file is opened by the function pager_open_journal() at line
30868 in the amalgamation source for version 3.6.2.

HTH
Dennis Cote
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
Am 09.09.2008 um 20:46 schrieb Dennis Cote:

> Lothar Behrens wrote:
>>
>> I have added this function right after sqlite3_step, which executes
>> the prepared insert statement.
>>
>>   int nReturn = sqlite3_step((sqlite3_stmt*)(*start));
>>
>>   int autocommit = sqlite3_get_autocommit(m_pDatabase);
>>
>>   if (autocommit == 0) {
>>       wxLogError(_("Warning: Database is not in autocommit mode.\n"));
>>   }
>>
>> autocommit is always 1. I have also tested the following:
>>
>> Opening the application and opening the form to display the first row
>> -> no journal file is opened, because no write has been done yet.
>>
>> Adding some rows and navigating forward and backward -> a journal
>> file is opened and I can see my data in the application.
>>
>
> The fact that a journal file exists at this point implies that you are
> still in a transaction. If you close the database without committing
> this transaction, the changes that you can see in your application will
> be rolled back and lost (see H12019 at
> http://www.sqlite.org/c3ref/close.html).
>
> Can you add a function to check the autocommit status in your main line
> code (i.e. where you are navigating and viewing the data)?
>

Hmm, I can add such a function beside the others that update my status
line, for example. But I don't believe that I start any transaction.

This is because if I simply open only this database form that causes the
problems, all additions are stored, and if I restart my application the
data is still there as inserted.

I only start a transaction when I create tables, because I rewrite some
statements. But this only happens when the database is freshly created.
Also, that code is tested and the transaction is committed, otherwise the
schema wouldn't exist after a restart.

But when you say that, if journal files are open, transactions are open,
I could set a breakpoint at the line of code where the transaction opens
the journal file and look around there to see where the transaction comes
from.

Is that an option?

What function in the sqlite library does this?

Lothar
Re: [sqlite] Trim everything that is entered into database
I'm guessing he means automatically (i.e. using triggers), which would
also be my answer: use triggers.

On Tue, Sep 9, 2008 at 1:52 PM, P Kishor <[EMAIL PROTECTED]> wrote:
> On 9/9/08, Josh Millstein <[EMAIL PROTECTED]> wrote:
>> Hello,
>>
>> Is there any way to perform a trim on everything that is entered into
>> a table, instead of trimming before I put the data in?
>
> aren't "perform a trim to everything that is entered into a table" and
> "trimming before I put data in" the same actions?
>
>> Thanks,
>> Josh

--
Enrique Ramirez Irizarry
Lead Developer, Indie Code Labs
http://www.indiecodelabs.com
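For what a trigger-based version might look like, here is a minimal
sketch (the table and column names are invented for illustration;
trim() strips leading and trailing spaces):

  /* db is an open sqlite3* handle */
  char *errMsg = 0;
  int rc = sqlite3_exec(db,
      "CREATE TRIGGER trim_after_insert AFTER INSERT ON mytable "
      "BEGIN "
      "  UPDATE mytable SET myfield = trim(myfield) "
      "   WHERE rowid = new.rowid; "
      "END;",
      0, 0, &errMsg);

After that, every INSERT into mytable gets its myfield value trimmed
automatically, so the application no longer has to trim before inserting.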
[sqlite] char to int conversion
I have a CHAR field which is usually an integer. I would like to sort
this field as if it were an integer, so that 1a 5b 10c 12xxx does not get
sorted as 10c 12xxx 1a 5b. I have successfully used something like
"ORDER BY CHARFIELDNAME - 0", which seems to convert the expression to an
integer the same way that atoi would. This is exactly what I want. I
would like to know if this is an accident or behavior that I can count on
in future versions of sqlite. Thank you.
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
Lothar Behrens wrote:
>
> I have added this function right after sqlite3_step, which executes the
> prepared insert statement.
>
>   int nReturn = sqlite3_step((sqlite3_stmt*)(*start));
>
>   int autocommit = sqlite3_get_autocommit(m_pDatabase);
>
>   if (autocommit == 0) {
>       wxLogError(_("Warning: Database is not in autocommit mode.\n"));
>   }
>
> autocommit is always 1. I have also tested the following:
>
> Opening the application and opening the form to display the first row
> -> no journal file is opened, because no write has been done yet.
>
> Adding some rows and navigating forward and backward -> a journal file
> is opened and I can see my data in the application.
>

The fact that a journal file exists at this point implies that you are
still in a transaction. If you close the database without committing this
transaction, the changes that you can see in your application will be
rolled back and lost (see H12019 at
http://www.sqlite.org/c3ref/close.html).

Can you add a function to check the autocommit status in your main line
code (i.e. where you are navigating and viewing the data)?

Dennis Cote
[sqlite] Fwd: Insert statement is ok but after reopening the db data is missing ?
> Prior post was too big :-(

Here is the short answer to my last attempt to figure out the current
file name:

> Now I have the result. It is the database file I thought I was in.
>
>   seq  name  file
>   -----------------
>   Warning: Unknown column:
>   0
>   Warning: Unknown column:
>   main
>   Warning: Unknown column:
>   /Users/lothar/develop/Projects/CPP/Test/GUI/wxWrapper/Template Designer.db3
>
> So, hmm, what's the problem now?
> It is the file I intended to have opened.

Lothar
Re: [sqlite] Trim everything that is entered into database
On 9/9/08, Josh Millstein <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Is there any way to perform a trim on everything that is entered into a
> table, instead of trimming before I put the data in?

aren't "perform a trim to everything that is entered into a table" and
"trimming before I put data in" the same actions?

>
> Thanks,
> Josh

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
Re: [sqlite] Bigger table and query optimization
2008/9/9 Igor Tandetnik <[EMAIL PROTECTED]>:
> Bruno Moreira Guedes <[EMAIL PROTECTED]> wrote:
>> I can split my 'something' into st0, st1, st2, stN... with a '.'
>> between the values. But the number of "st"s is undefined. I tried to
>> do "the necessary" number of comparisons, appending tokens:
>>
>> SELECT fields FROM sometable WHERE field = 'st0.st1.st2.st3' OR field
>> = 'st1.st2.st3' OR field = 'st2.st3' OR field = 'st3';
>>
>> Unfortunately, even after creating an INDEX, the performance decreased
>> instead of increasing.
>
> Try formulating it this way:
>
> SELECT fields FROM sometable WHERE field IN (
>    'st0.st1.st2.st3', 'st1.st2.st3', 'st2.st3', 'st3');
>
> As the number of pieces grows, it might be better to create and fill a
> temporary table with suffixes, then do
>
> SELECT fields FROM sometable WHERE field IN
>    (select suffix from suffixes);
>
> Igor Tandetnik

Hi all!!

Dennis, thank you for your idea. I've done some testing, and it seems to
be the best solution. Unfortunately, my table has three fields (including
the special rowid field): two integers and a varchar. The amount of data
recently added (about 500,000 rows) increased the database file by 32 MB.
So I think maintaining a 'reverse copy' of the varchar field would
increase the database file size too much -- considering it'll grow about
10x in a few days.

Igor, thank you too. Using 'IN' really solved the problem; the query runs
fast. It'll require some 'manual string processing', because my
application will have to generate the "list of something's tails", but as
the number of tails is not large this isn't a problem, and it solves my
needs for now.

Thank you all, people. It's hard to develop database applications with
little database knowledge. Without you I couldn't have made it work.

Bruno M Guedes
Re: [sqlite] memsys3 vs memsys5
>> Is there any reliable method to determine the minimum page-cache
>> allocation size needed for a given page_size?
>
> sqlite3_status(SQLITE_STATUS_PAGECACHE_SIZE, ...)

Unfortunately, there is extra overhead involved which is not handled by
the page cache but is allocated by malloc(), if I understand correctly.
From reading the sources (some weeks ago, the exact numbers might have
changed) I determined that an extra of at least 152 bytes is allocated
for each page. The reference is nByteHdr in pager.c, plus an extra 4
bytes (on 32 bit) for the slot in the page hash (probably a little more
to accommodate empty slots).

Interestingly, the page cache overhead can differ between SQLite versions
and also depends on SQLITE_OMIT_... compiler settings.

152 bytes might not seem much, but this is nearly 15% for a page size of
1024 bytes, and nearly 30% for a 512 byte page. So you save overhead by
choosing a larger page size.

Concluding, memory requirements are much higher than just

  pragma page_size * pragma cache_size

This can add up to a lot of RAM when using page_size = 10 or more to
speed up indexing of very large databases. Users should be aware of this
when running SQLite in extreme (low RAM or very large database)
scenarios.

Ralf
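As a rough worked example of what that overhead means in practice, using
the figures above (which, as noted, can vary between versions and
builds):

  int page_size  = 1024;
  int cache_size = 2000;                  /* pragma cache_size */
  int per_page   = page_size + 152 + 4;   /* page data + pager header + hash slot */
  long total     = (long)cache_size * per_page;
  /* roughly 2.36 MB in total, versus about 2.05 MB of raw page data */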
[sqlite] documentation for memsys5 PAGESIZE
Hi list. Just thought I'd pass this along as a suggestion. I mistakenly
called:

  sz = pgSz * pgCache;
  pBuf = malloc(sz);
  sqlite3_config(SQLITE_CONFIG_PAGECACHE, pBuf, sz, pgCache);

This appeared to work, but the sqlite MallocInit later cored due to a
page size of 32 MB. The documentation does clearly indicate that sz is
the size of a single page, but maybe something in the comment section
would help as well. Perhaps changing this:

  #define SQLITE_CONFIG_PAGECACHE  7  /* void*, int sz, int N */

to this would be clearer:

  #define SQLITE_CONFIG_PAGECACHE  7  /* void*, int pgSz, int N */

Maybe an internal sanity check could be implemented in sqlite3_config as
well?
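For comparison, a sketch of how the call was presumably meant to look
(the extra per-slot overhead value here is only an assumption; see the
memsys discussion elsewhere in this digest):

  /* Must be called before the library is initialized. sz is the size
  ** of ONE cache slot (page size plus some per-page overhead), and N is
  ** the number of slots -- not the total buffer size. */
  int pgSz    = 1024 + 256;     /* page_size + assumed per-page overhead */
  int pgCache = 5000;           /* number of slots */
  void *pBuf  = malloc((size_t)pgSz * pgCache);
  sqlite3_config(SQLITE_CONFIG_PAGECACHE, pBuf, pgSz, pgCache);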
Re: [sqlite] Trim everything that is entered into database
Josh Millstein <[EMAIL PROTECTED]> wrote:
> Is there any way to perform a trim on everything that is entered into
> a table, instead of trimming before I put the data in?

I'm not sure I understand the question. Are you perhaps thinking of

  update mytable set myfield=trim(myfield);

Igor Tandetnik
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
Am 09.09.2008 um 17:20 schrieb Dennis Cote:

> Jay A. Kreibich wrote:
>>
>> Everything you describe sounds exactly as if a transaction has been
>> started, but is not committed. When you close the database, the
>> transaction is automatically (and correctly) rolled back. This will
>> also delete the journal file.
>>
>> I know you said you weren't trying to start a transaction, but you
>> might double check that. Set a breakpoint right after the INSERT is
>> finished and check to see if you have a journal file or not. You
>> could also try issuing a "BEGIN" right after the INSERT. If you get
>> an error, you're already inside a transaction.
>
> An easier and more accurate way to check may be to add a call to
> sqlite3_get_autocommit() after your insert is complete. It will return
> zero if there is an active transaction, and 1 if there is not (i.e. it
> is in autocommit mode).

Ok, I have added this function right after sqlite3_step, which executes
the prepared insert statement.

  int nReturn = sqlite3_step((sqlite3_stmt*)(*start));

  int autocommit = sqlite3_get_autocommit(m_pDatabase);

  if (autocommit == 0) {
      wxLogError(_("Warning: Database is not in autocommit mode.\n"));
  }

autocommit is always 1. I have also tested the following:

Opening the application and opening the form to display the first row ->
no journal file is opened, because no write has been done yet.

Adding some rows and navigating forward and backward -> a journal file is
opened and I can see my data in the application.

Now I will try to use "pragma database_list;" but I need to restart my
box, so I will mail the result later :-(

Lothar
[sqlite] Trim everything that is entered into database
Hello,

Is there any way to perform a trim on everything that is entered into a
table, instead of trimming before I put the data in?

Thanks,
Josh
Re: [sqlite] Bigger table and query optimization
Bruno Moreira Guedes <[EMAIL PROTECTED]> wrote:
> I can split my 'something' into st0, st1, st2, stN... with a '.'
> between the values. But the number of "st"s is undefined. I tried to
> do "the necessary" number of comparisons, appending tokens:
>
> SELECT fields FROM sometable WHERE field = 'st0.st1.st2.st3' OR field
> = 'st1.st2.st3' OR field = 'st2.st3' OR field = 'st3';
>
> Unfortunately, even after creating an INDEX, the performance decreased
> instead of increasing.

Try formulating it this way:

  SELECT fields FROM sometable WHERE field IN (
     'st0.st1.st2.st3', 'st1.st2.st3', 'st2.st3', 'st3');

As the number of pieces grows, it might be better to create and fill a
temporary table with suffixes, then do

  SELECT fields FROM sometable WHERE field IN
     (select suffix from suffixes);

Igor Tandetnik
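A sketch of how that temporary table could be set up (names follow the
example above; db is assumed to be an open handle):

  char *errMsg = 0;
  sqlite3_exec(db,
      "CREATE TEMP TABLE suffixes(suffix TEXT PRIMARY KEY);",
      0, 0, &errMsg);
  /* Insert one row per tail of the search string, then run the
  ** SELECT ... WHERE field IN (SELECT suffix FROM suffixes) query. */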
Re: [sqlite] Query runs faster when repeating condition
Correcting a minor nit:

Igor Tandetnik wrote:
> Magnus Manske <[EMAIL PROTECTED]> wrote:
> The condition could be written a bit more laconically:
>
>   pos1 BETWEEN 1 AND 100
>
>> On my table (ca. 180K entries) this takes about 10 seconds for the
>> above, which returns all rows.
>
> Since you are not benefitting from the index for this query, you might
> get better performance by not using it:
>
>   +pos1 BETWEEN 1 AND 100
>
> The unary plus operator suppresses the use of the index on the affected
> column, while having no effect on the value of the expression.

It seems I am not the only one with this misunderstanding: the claim that
the unary plus operator has no effect on the value of the expression is
wrong. See http://www.sqlite.org/cvstrac/tktview?tn=3279 for one scenario
where this operator affects the result set.

Steve Friedman
Re: [sqlite] Question about vacuum
On Sep 9, 2008, at 10:10 PM, Ribeiro, Glauber wrote:

> Hello,
>
> Here's another newbie question, but it doesn't look like the
> documentation for vacuum covers this.
>
> I'm using SQLite to store an application's log. Suppose I have a
> program that runs nightly and deletes log records that are older than
> 60 days, to keep the database from growing without boundaries.
>
> I understand that deleting the records doesn't shrink the database
> file, unless I also vacuum. However, will SQLite re-use the space taken
> by the deleted records to store new ones? (Would I be better off not
> doing vacuum, and letting SQLite reuse the disk space instead?) I
> believe this is the case with PostgreSQL.

It will reuse the freed space.

Dan.
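If you want to watch that happening, one way (a sketch; db is assumed to
be an open handle) is to read back the number of unused pages in the file
with "PRAGMA freelist_count" before and after the nightly delete:

  sqlite3_stmt *pStmt = 0;
  if (sqlite3_prepare_v2(db, "PRAGMA freelist_count;", -1, &pStmt, 0) == SQLITE_OK
      && sqlite3_step(pStmt) == SQLITE_ROW)
  {
      printf("free pages: %d\n", sqlite3_column_int(pStmt, 0));
  }
  sqlite3_finalize(pStmt);

The count should grow after the delete and shrink again as new log
records reuse the freed pages.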
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
Jay A. Kreibich wrote:
>
> Everything you describe sounds exactly as if a transaction has been
> started, but is not committed. When you close the database, the
> transaction is automatically (and correctly) rolled back. This will
> also delete the journal file.
>
> I know you said you weren't trying to start a transaction, but you
> might double check that. Set a breakpoint right after the INSERT is
> finished and check to see if you have a journal file or not. You
> could also try issuing a "BEGIN" right after the INSERT. If you get
> an error, you're already inside a transaction.

An easier and more accurate way to check may be to add a call to
sqlite3_get_autocommit() after your insert is complete. It will return
zero if there is an active transaction, and 1 if there is not (i.e. it
is in autocommit mode).

See http://www.sqlite.org/c3ref/get_autocommit.html for details.

HTH
Dennis Cote
Re: [sqlite] Bigger table and query optimization
2008/9/9 Dennis Cote <[EMAIL PROTECTED]>:
> Bruno Moreira Guedes wrote:
>>
>> I can split my 'something' into st0, st1, st2, stN... with a '.'
>> between the values. But the number of "st"s is undefined. I tried to
>> do "the necessary" number of comparisons, appending tokens:
>>
>> SELECT fields FROM sometable WHERE field = 'st0.st1.st2.st3' OR field
>> = 'st1.st2.st3' OR field = 'st2.st3' OR field = 'st3';
>
> You could try this
>
>   select fields from sometable
>   where substr(field, -length(:somestring)) = :somestring;

What I actually need to check is whether 'something' ends with
'.' || field, but this should still be useful. Thank you!

> This will still require a full table scan and will not use an index,
> but the overhead of testing if the field ends with the appropriate
> string should be as small as possible.
>
> Your main problem is there is no way to use an index to match the end
> of a string. If this is a common operation for your database, you may
> want to add a field that stores the strings in reverse order. You can
> then add an index on that string. What used to be the end of the string
> is now the beginning of the reversed field, and can be searched quickly
> using an index.
>
> You will need to create a user defined function to reverse the
> characters of a string.
>
>   reverse('string') => 'gnirts'
>
> With this function you could add a new field to your database and
> create a suitable index on that field.
>
>   alter table sometable add column reversed text;
>   update sometable set reversed = reverse(field);
>   create index field_reversed_idx on sometable(reversed);
>
> Now you can use the same function to reverse the string you are trying
> to match and use a like comparison to locate the strings quickly using
> the index (since the search string is now a prefix).
>
>   select field from sometable
>   where reversed like reverse(:somestring) || '%';
>
> HTH
> Dennis Cote

Yes, it's the perfect solution! I'll work on implementing this. Thank you
again!

Thank you all,
Bruno
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
On Tue, Sep 09, 2008 at 04:55:01PM +0200, Lothar Behrens scratched on the wall:

> I prepare an insert statement and bind the parameters that afterwards
> get executed with sqlite_step(). This function then returns SQLITE_DONE
> and I prepare a new select statement which lets me display the just
> inserted data in my database forms (database file not closed during
> this).

> Does someone have any more ideas how to narrow down the problem?
> (Anywhere from the insert statement to the closing of the file.)

Everything you describe sounds exactly as if a transaction has been
started, but is not committed. When you close the database, the
transaction is automatically (and correctly) rolled back. This will also
delete the journal file.

I know you said you weren't trying to start a transaction, but you might
double check that. Set a breakpoint right after the INSERT is finished
and check to see if you have a journal file or not. You could also try
issuing a "BEGIN" right after the INSERT. If you get an error, you're
already inside a transaction.

-j

--
Jay A. Kreibich < J A Y @ K R E I B I.C H >
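A sketch of that "BEGIN" probe in code (error handling trimmed; db is
assumed to be the open handle):

  char *errMsg = 0;
  int rc = sqlite3_exec(db, "BEGIN", 0, 0, &errMsg);
  if (rc != SQLITE_OK) {
      /* typically "cannot start a transaction within a transaction" */
      fprintf(stderr, "already inside a transaction: %s\n", errMsg);
      sqlite3_free(errMsg);
  } else {
      sqlite3_exec(db, "COMMIT", 0, 0, 0);   /* undo the empty probe */
  }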
[sqlite] Question about vacuum
Hello,

Here's another newbie question, but it doesn't look like the
documentation for vacuum covers this.

I'm using SQLite to store an application's log. Suppose I have a program
that runs nightly and deletes log records that are older than 60 days, to
keep the database from growing without boundaries.

I understand that deleting the records doesn't shrink the database file,
unless I also vacuum. However, will SQLite re-use the space taken by the
deleted records to store new ones? (Would I be better off not doing
vacuum, and letting SQLite reuse the disk space instead?) I believe this
is the case with PostgreSQL.

Thanks,
glauber
Re: [sqlite] Insert statement is ok but after reopening the db data is missing ?
Lothar Behrens wrote:
>
> Does someone have any more ideas how to narrow down the problem?
> (Anywhere from the insert statement to the closing of the file.)

Try executing "pragma database_list;" after the insert. Double check the
file name and path shown for the main database and ensure that it is the
same file you are looking at with your database browser.

HTH
Dennis Cote
[sqlite] Insert statement is ok but after reopening the db data is missing ?
Hi,

I am still struggling with my insert statement with bound parameters
that - followed step by step in the VDBE - seems to insert the data, but
the data is not available after closing the database (checked with
sqliteman).

I have also created a small test application using my database API
wrappers. In this sample I could insert the data, so I cannot recreate
the problem in a small application.

Problem:

I prepare an insert statement and bind the parameters, then execute it
with sqlite3_step(). This function returns SQLITE_DONE, and I prepare a
new select statement which lets me display the just inserted data in my
database forms (the database file is not closed during this).

I then close the database, with a breakpoint after the close to see what
sqliteman tells me: the table is empty and no journal file is open!

How can it be that

1.) the insert statement succeeds (I have been able to select the rows
    with a new statement while the database is still open), and

2.) the engine doesn't store these changes, even though I have seen them
    (I do not manually start a transaction that could be left
    uncommitted)?

Does someone have any more ideas how to narrow down the problem?
(Anywhere from the insert statement to the closing of the file.)

Thanks in advance,
Lothar
Re: [sqlite] memsys3 vs memsys5
On Sep 8, 2008, at 8:28 PM, Dave Toll wrote:

> Hello list
>
> I'm using SQLite 3.6.2 on an embedded C platform, and I recently
> switched from memsys3 to memsys5. SQLite memory usage
> (sqlite3_memory_used()) for a fully-cached database nearly doubled from
> 4.7MB to 8.9MB (using the same DB file in each test). Is this
> normal/expected?
>
> I've also played with SQLITE_CONFIG_PAGECACHE, which seems to be more
> appropriate for a limited-memory system where the amount of data will
> often be close to the memory limit. I discovered that it was necessary
> to allow an overhead on top of the page_size when specifying the
> page-cache allocation size (sz argument), otherwise the page-cache
> would not be used as the page-cache units were too small.

Have you read http://www.sqlite.org/malloc.html

> Is there any reliable method to determine the minimum page-cache
> allocation size needed for a given page_size?

sqlite3_status(SQLITE_STATUS_PAGECACHE_SIZE, ...)

D. Richard Hipp
[EMAIL PROTECTED]
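A sketch of how that call might be used (the third argument receives the
high-water mark, i.e. the largest page-cache allocation seen so far):

  int cur = 0, hiwtr = 0;
  sqlite3_status(SQLITE_STATUS_PAGECACHE_SIZE, &cur, &hiwtr, 0 /* don't reset */);
  printf("largest page-cache allocation so far: %d bytes\n", hiwtr);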
Re: [sqlite] Bigger table and query optimization
Bruno Moreira Guedes wrote:
>
> I can split my 'something' into st0, st1, st2, stN... with a '.'
> between the values. But the number of "st"s is undefined. I tried to
> do "the necessary" number of comparisons, appending tokens:
>
> SELECT fields FROM sometable WHERE field = 'st0.st1.st2.st3' OR field
> = 'st1.st2.st3' OR field = 'st2.st3' OR field = 'st3';

You could try this

  select fields from sometable
  where substr(field, -length(:somestring)) = :somestring;

This will still require a full table scan and will not use an index, but
the overhead of testing whether the field ends with the appropriate
string should be as small as possible.

Your main problem is that there is no way to use an index to match the
end of a string. If this is a common operation for your database, you may
want to add a field that stores the strings in reverse order. You can
then add an index on that field. What used to be the end of the string is
now the beginning of the reversed field, and can be searched quickly
using an index.

You will need to create a user defined function to reverse the characters
of a string:

  reverse('string') => 'gnirts'

With this function you could add a new field to your database and create
a suitable index on that field.

  alter table sometable add column reversed text;
  update sometable set reversed = reverse(field);
  create index field_reversed_idx on sometable(reversed);

Now you can use the same function to reverse the string you are trying to
match and use a LIKE comparison to locate the rows quickly using the
index (since the search string is now a prefix of the stored string).

  select field from sometable
  where reversed like reverse(:somestring) || '%';

HTH
Dennis Cote
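A minimal sketch of such a reverse() function, registered as an
application-defined SQL function (it works bytewise, so it is not UTF-8
aware; treat it as a starting point only):

  #include <string.h>
  #include "sqlite3.h"

  static void reverseFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv)
  {
      const unsigned char *z = sqlite3_value_text(argv[0]);
      int n, i;
      char *r;
      (void)argc;
      if (z == 0) { sqlite3_result_null(ctx); return; }
      n = sqlite3_value_bytes(argv[0]);
      r = sqlite3_malloc(n + 1);
      if (r == 0) { sqlite3_result_error_nomem(ctx); return; }
      for (i = 0; i < n; i++) r[i] = z[n - 1 - i];
      r[n] = 0;
      sqlite3_result_text(ctx, r, n, sqlite3_free);
  }

  /* register it on an open connection: */
  sqlite3_create_function(db, "reverse", 1, SQLITE_UTF8, 0,
                          reverseFunc, 0, 0);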
Re: [sqlite] Error A0A
[EMAIL PROTECTED] wrote:
> I'm using some wrapper but modified.
> Here is my Exec method:
>
>   if ( rc == SQLITE_BUSY)
>   {
>       Sleep(0);
>       rc = _sqlite3_reset(m_stmt);
>       continue;
>   }

You should not be resetting the prepared statement on a busy return. You
should simply sleep and then continue to retry. If you want to ensure you
don't loop forever, you could add a retry counter and do a reset and
return if the retry limit is exceeded.

HTH
Dennis Cote
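A sketch of the loop rewritten along those lines, reusing the names from
the posted wrapper (the retry limit and sleep interval are arbitrary):

  int retries = 0;
  while (1)
  {
      rc = _sqlite3_step(m_stmt);
      if (rc == SQLITE_ROW)
          return 1;
      if (rc == SQLITE_DONE)
      {
          _sqlite3_reset(m_stmt);
          return 0;
      }
      if (rc == SQLITE_BUSY && ++retries < 100)
      {
          Sleep(1);      /* back off briefly, then retry the same step */
          continue;      /* note: no reset here */
      }
      break;             /* real error, or retry limit exceeded */
  }
  _sqlite3_reset(m_stmt);
  return -1;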
Re: [sqlite] Bigger table and query optimization
2008/9/9 Igor Tandetnik <[EMAIL PROTECTED]>:
> "Bruno Moreira Guedes" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>> Yes... I need to get the rows where 'something' ENDS with
>> '.[field-value]'. But I really don't think this solution works very
>> well.
>
> Can field-value contain a period? If not, you can just extract the
> portion of 'something' after the last period, then run a straight
> comparison: WHERE somefield='tail-of-somestring'. Such a comparison can
> then be sped up with an index.
>
> Igor Tandetnik

Igor,

I can split my 'something' into st0, st1, st2, stN... with a '.' between
the values. But the number of "st"s is undefined. I tried to do "the
necessary" number of comparisons, appending tokens:

  SELECT fields FROM sometable WHERE field = 'st0.st1.st2.st3' OR field
  = 'st1.st2.st3' OR field = 'st2.st3' OR field = 'st3';

Unfortunately, even after creating an INDEX, the performance decreased
instead of increasing.

Now I'm thinking about using length() to filter which strings will be
compared... After some testing, this improved performance a little, but
it is still very bad!

I'm still looking for hints!

Thank you,
Bruno M Guedes
Re: [sqlite] Bigger table and query optimization
"Bruno Moreira Guedes" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Yes... I need to get the rows where 'something' ENDS wifh > '.[field-value]'. But I really think this solution isn't very well. Can field-value contain a period? If not, you can just extract the portion of 'something' after the last period, then run a straight comparison: WHERE somefield='tail-of-somestring' . Such a comparison can then be sped up with an index. Igor Tandetnik ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Bigger table and query optimization
2008/9/9 P Kishor <[EMAIL PROTECTED]>:
> On 9/9/08, Bruno Moreira Guedes <[EMAIL PROTECTED]> wrote:
>> Hi people!!
>>
>> I'm getting problems with a bigger table. I'm doing a query like this:
>>
>> SELECT field FROM tables WHERE '%.' || somefield LIKE 'somestring'
>
> interesting variation on the syntax. Usually one would
>
> WHERE somefield LIKE '%.somestring'

Yes... I need to get the rows where 'something' ENDS with
'.[field-value]'. But I really don't think this solution works very well.

>> The table has about 500.000 registers. When the table is small it's
>> very fast, but once it's populated it gets slow. What can I do to
>> optimize this query? Would an INDEX help me, if it's comparing
>> concatenated strings?
>
> LIKE doesn't use INDEXes, so you are really out of luck with big
> tables. Try some other approach. On big tables it will be slow,
> although 500K "registers" (is a "register" the same as "rows"?) is not
> much at all.

Yes, with "registers" I'm talking about "rows". 500K rows increased the
database file size by about 34 MB. And I fear it's just 'a little',
because I'll probably import more 'bigger amounts' of data (each 'amount'
with about 500K rows).

> I would try FTS.

Thank you for the help, I'll read the FTS docs right now!
Re: [sqlite] Bigger table and query optimization
On 9/9/08, Bruno Moreira Guedes <[EMAIL PROTECTED]> wrote:
> Hi people!!
>
> I'm getting problems with a bigger table. I'm doing a query like this:
>
> SELECT field FROM tables WHERE '%.' || somefield LIKE 'somestring'

interesting variation on the syntax. Usually one would

  WHERE somefield LIKE '%.somestring'

>
> The table has about 500.000 registers. When the table is small it's
> very fast, but once it's populated it gets slow. What can I do to
> optimize this query? Would an INDEX help me, if it's comparing
> concatenated strings?

LIKE doesn't use INDEXes, so you are really out of luck with big tables.
Try some other approach. On big tables it will be slow, although 500K
"registers" (is a "register" the same as "rows"?) is not much at all.

I would try FTS.

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
[sqlite] Bigger table and query optimization
Hi people!!

I'm getting problems with a bigger table. I'm doing a query like this:

  SELECT field FROM tables WHERE '%.' || somefield LIKE 'somestring'

The table has about 500.000 registers. When the table is small it's very
fast, but once it's populated it gets slow. What can I do to optimize
this query? Would an INDEX help me, if it's comparing concatenated
strings?

Thank you all,
Bruno
Re: [sqlite] Error A0A
I'm using some wrapper, but modified. Here is my Exec method:

  while(1)
  {
      rc = _sqlite3_step(m_stmt);
      if (rc == SQLITE_ROW)
          return 1;
      if (rc == SQLITE_DONE)
      {
          _sqlite3_reset(m_stmt);
          return 0;
      }
      if (rc == SQLITE_BUSY)
      {
          Sleep(0);
          rc = _sqlite3_reset(m_stmt);
          continue;
      }
      break;
  }

Date: Mon, 8 Sep 2008 10:07:57 -0400
From: "Jeffrey Becker" <[EMAIL PROTECTED]>
Subject: Re: [sqlite] Error A0A

Are you using the native api or one of the wrappers?

On Mon, Sep 8, 2008 at 9:58 AM, Dan <[EMAIL PROTECTED]> wrote:
>
> On Sep 8, 2008, at 1:47 PM, [EMAIL PROTECTED] wrote:
>
>> I'm using multiple connections to an sqlite database from several
>> threads in my application. A few customers report an sqlite error on
>> Windows Vista which I cannot reproduce on my system.
>> The error code is 0xA0A (I haven't turned on extended error codes).
>> This error number makes no sense to me because it is not defined in
>> the extended error codes either.
>> Most of the time it occurs when I'm trying to run a "Commit" command,
>> sometimes during a SELECT.
>> Does anybody know what the problem could be?
>
> 0xA0A is SQLITE_IOERR_DELETE. Indicates a delete operation has failed.
> Could be something to do with anti-virus software preventing a journal
> file from being deleted.
>
> Dan.
[sqlite] memsys3 vs memsys5
Hello list

I'm using SQLite 3.6.2 on an embedded C platform, and I recently switched
from memsys3 to memsys5. SQLite memory usage (sqlite3_memory_used()) for
a fully-cached database nearly doubled from 4.7MB to 8.9MB (using the
same DB file in each test). Is this normal/expected?

I've also played with SQLITE_CONFIG_PAGECACHE, which seems to be more
appropriate for a limited-memory system where the amount of data will
often be close to the memory limit. I discovered that it was necessary to
allow an overhead on top of the page_size when specifying the page-cache
allocation size (sz argument), otherwise the page-cache would not be used
as the page-cache units were too small. Is there any reliable method to
determine the minimum page-cache allocation size needed for a given
page_size?

Cheers,
Dave Toll.