Re: [sqlite] colname=22 vs colname='22'
On Tue, 2007-06-26 at 17:50 -0400, jose isaias cabrera wrote: > Greetings. > > I have the following db declarations: > > SQLite version 3.3.8 > Enter ".help" for instructions > sqlite> .schema > CREATE TABLE LSOpenJobs > ( > id integer primary key, ProjID integer, subProjID, parent, > children, login, cust, proj, PClass, PSubClass, bdate, ddate, edate, pm, > pmuk, lang, vendor, vEmail, invoice, ProjFund, PMTime, A_No, wDir, BiliDir, > TMDir, DeliveryDir, paid, paidDate, notes, status > ); > CREATE TABLE LSOpenProjects > ( >id integer primary key, ProjID integer, subProjID, parent, children, > login, cust, proj, PClass, PSubClass, bdate, ddate, edate, pm, pmuk, lang, > vendor, vEmail, invoice, ProjFund, PMTime, A_No, wDir, BiliDir, TMDir, > DeliveryDir, paid, paidDate, notes, status > ); > CREATE TABLE LSOpenSubProjects > ( >id integer primary key, ProjID integer, subProjID, parent, children, > login, cust, proj, PClass, PSubClass, bdate, ddate, edate, pm, pmuk, lang, > vendor, vEmail, invoice, ProjFund, PMTime, A_No, wDir, BiliDir, TMDir, > DeliveryDir, paid, paidDate, notes, status > ); > CREATE TABLE PMTime (id integer primary key, rec integer, date, secs > integer); > CREATE TABLE PMUserData > ( > login primary key, > Name, > Password, > email, > phone, > homephone, > Lang, > ProjOwned > ); > > If I do this call, > > sqlite> select * from LSOpenJobs where SubProjID='22'; > > I get nothing. If I do this call, > sqlite> select * from LSOpenJobs where SubProjID=22; > 106|22|22|22||...|c > 107|22|22|22||...|c > 108|22|22|22||...|c > 109|22|22|22||...|c > sqlite> > > I get stuff. Anybody would like to tell me why? I have done some command > prompt manual record deletion and edition, but it should not matter, > correct? It's about the differences between a string and a number, how they compare and when a string is converted to a number. See section 3 of this page: http://www.sqlite.org/datatype3.html Dan. 
> thanks, > > josé > > > - > To unsubscribe, send email to [EMAIL PROTECTED] > - > - To unsubscribe, send email to [EMAIL PROTECTED] -
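The affinity rules Dan points to can be demonstrated in a few lines. The sketch below uses Python's sqlite3 module against a hypothetical cut-down version of the schema above: since subProjID is declared with no type at all, the column has no affinity, and the text '22' is never converted to a number before the comparison.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# subProjID declared with no type, as in the LSOpenJobs schema -> no affinity.
con.execute("CREATE TABLE jobs (subProjID)")
con.execute("INSERT INTO jobs VALUES (22)")          # stored as INTEGER 22

as_text = con.execute("SELECT count(*) FROM jobs WHERE subProjID = '22'").fetchone()[0]
as_int  = con.execute("SELECT count(*) FROM jobs WHERE subProjID = 22").fetchone()[0]
print(as_text, as_int)  # 0 1 -- the text '22' never equals the integer 22

# Declare the column INTEGER and the comparison converts '22' first:
con.execute("CREATE TABLE jobs2 (subProjID INTEGER)")
con.execute("INSERT INTO jobs2 VALUES (22)")
converted = con.execute("SELECT count(*) FROM jobs2 WHERE subProjID = '22'").fetchone()[0]
print(converted)  # 1
```

So either declaring the column with an INTEGER type or comparing against an unquoted number makes the two queries behave the same.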
Re: [sqlite] SQLite Project. A contribution to mankind
"Cesar D. Rodas" <[EMAIL PROTECTED]> wrote: > > I was surfing and I saw that sqlite website is very busy with a hight band > width traffic (http://alexa.com/data/details/traffic_details?url=sqlite.org) > and I was reading about the SQLite has an Server which is not the normal > apache or other known webserver, is that right? > > I am interesting to know more about this server, and configurations about > know more about the SQLite server configuration, what do you do Mr. Hipp for > handle a great quantity of traffic (server configuration, your wserver code > (if you can give it away) and a network connection)? If you can give that > information will be great, coz I think is better the Practice (and your site > is very very busy) than theory. >

I was surfing and I found your article: http://cesarodas.com/2007/06/how-to-manage-thousands-visitors-part-ii-wwwsqliteorg.html

Let me give you slightly more up-to-date statistics on the www.sqlite.org website. Traffic has been on a steady rise for some time now, and for the past week we've seen in excess of 1 unique IPs per day. The total number of hits is still running around 7/day. Bandwidth is over 3GB per day. CPU utilization is running about 4%. (It is unclear to me if that is 4% of the total CPU available on the physical host or 4% of my 1/20th slice of that host. Probably the former.)

I made a change to the althttpd.c server a couple of months ago where it automatically drops any connection from the msnbot or IE5 running on windows95 (as determined by the USER-AGENT parameter in the HTTP header). Kicking off the msnbot resulted in a huge reduction in hits but with no reduction in the number of unique IPs. This means, of course, that SQLite is no longer listed on the MSN search, but nobody seems to use MSN so that is no big loss. And the MSN bot is downright abusive in the way it hammers a site. The load presented by the msnbot far exceeds the combined load of all other search engines on the internet.
Go figure. I used to get lots of hung win95/IE5 clients that would do things like download 5 copies of the tarball over the course of 8 hours. I would be getting 2 or 3 download requests per second. This was chewing through a lot of bandwidth, so I just made the decision to ban win95/IE5 from the site. So far, no complaints have reached me.

Perhaps someday my humble little website will be overwhelmed and I will have to switch to something like Apache which is designed to handle a heavy load. But for now, everything seems to be going along peachy. If it ain't broke, don't fix it.

-- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
RE: [sqlite] Introducing... ManagedSQLite
> -Original Message- > From: Yves Goergen [mailto:[EMAIL PROTECTED] > Sent: Tuesday, June 26, 2007 4:55 PM > To: sqlite-users@sqlite.org > Subject: Re: [sqlite] Introducing... ManagedSQLite > > On 26.06.2007 00:24 CE(S)T, WHITE, DANIEL wrote: > > The main advantage of mine is that it is lightweight and easy to use > > without using ADO.NET. > > Okay, the other SQLite.NET DLL has 592 kB, that's not necessarily > "lightweight". I'd like to be able to include the SQLite library into > the main assembly so that I won't need a separate DLL anymore. I'm not > sure though how good that works with each solution or whether it's a > good idea for other reasons. > > But I'm quite happy with it without using ADO.NET. In fact, I > don't like > the idea of copying each and every record into memory and having it > sorted and filtered there (which ADO.NET propagates) instead of having > that done directly in the source database engine... The nice thing about ADO.NET (other than all the bells and whistles built on top of it) is that you can decide where you want to process things. If you're using a client/server model and want to process on the client-side, you can use the disconnected model that behaves as you've described. If you'd rather communicate directly with the database and process per-row as quickly as possible, you can use the DbCommand/DbDataReader model. The Command/DataReader model conforms very nicely with SQLite's sqlite3_prepare/sqlite3_step model and is a relatively thin layer on top of it. The 592k desktop version of the library is built for performance using PGO (profile guided optimizations) and includes both FTS1 and FTS2 modules (for now) as well as the ADO.NET wrapper assembly. While some may argue whether or not 592k is "lightweight", it's definitely not in the heavyweight category either. On the desktop, 592k is really considered a pittance. When built to minimize size on the Compact Framework w/o FTS1, the library is only 476kb. 
Considering the alternative databases available for the Compact Framework, that's roughly 1/3rd the nearest competitor's size. Robert - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Crashes and random wrong results with certain column names
Yves Goergen <[EMAIL PROTECTED]> wrote: > On 27.06.2007 01:35 CE(S)T, [EMAIL PROTECTED] wrote: > > http://www.sqlite.org/cvstrac/tktview?tn=2450 > > Wow, I guess from that page that it's already fixed? > > One question regarding the issue tracker: Is there a reference of what > the severity and priority values mean? Is a lower value more or less > severe/important? I can't figure that out from the complete listing. >

The severity and priority can mean whatever you want :-) People do not use them consistently and so I usually ignore them. CVSTrac, which is what is used for the issue tracker, is used in other situations where the severity and priority *are* used heavily. They just are not used here. More information at: http://www.cvstrac.org/

-- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Crashes and random wrong results with certain column names
On 27.06.2007 01:35 CE(S)T, [EMAIL PROTECTED] wrote: > http://www.sqlite.org/cvstrac/tktview?tn=2450 Wow, I guess from that page that it's already fixed? One question regarding the issue tracker: Is there a reference of what the severity and priority values mean? Is a lower value more or less severe/important? I can't figure that out from the complete listing. -- Yves Goergen "LonelyPixel" <[EMAIL PROTECTED]> Visit my web laboratory at http://beta.unclassified.de - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Introducing... ManagedSQLite
On 26.06.2007 00:24 CE(S)T, WHITE, DANIEL wrote: > The main advantage of mine is that it is lightweight and easy to use > without using ADO.NET. Okay, the other SQLite.NET DLL has 592 kB, that's not necessarily "lightweight". I'd like to be able to include the SQLite library into the main assembly so that I won't need a separate DLL anymore. I'm not sure though how good that works with each solution or whether it's a good idea for other reasons. But I'm quite happy with it without using ADO.NET. In fact, I don't like the idea of copying each and every record into memory and having it sorted and filtered there (which ADO.NET propagates) instead of having that done directly in the source database engine... -- Yves Goergen "LonelyPixel" <[EMAIL PROTECTED]> Visit my web laboratory at http://beta.unclassified.de - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Crashes and random wrong results with certain column names
[EMAIL PROTECTED] wrote: You should file a bug ticket at http://www.sqlite.org/cvstrac/captcha?nxp=/cvstrac/tktnew since these are all valid quoted SQL identifiers. http://www.sqlite.org/cvstrac/tktview?tn=2450 I see I'm late to the party again. :-) I'm glad to see this was fixed so quickly. Dennis Cote - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Crashes and random wrong results with certain column names
Dennis Cote <[EMAIL PROTECTED]> wrote: > Yves Goergen wrote: > > Hi, > > > > I've tested my own SQLite application's identifier quoting capabilities > > now and found that the SQLite engine has serious problems with > > table/column names containing certain special characters. Just try the > > following: > > > > CREATE TABLE "t a" ("c a", """cb"""); > > INSERT INTO "t a" ("c a", """cb""") VALUES (1, 2); > > INSERT INTO "t a" ("c a", """cb""") VALUES (11, 12); > > INSERT INTO "t a" ("c a", """cb""") VALUES (21, 22); > > SELECT * FROM "t a"; > > > > You should file a bug ticket at > http://www.sqlite.org/cvstrac/captcha?nxp=/cvstrac/tktnew since these > are all valid quoted SQL identifiers. > http://www.sqlite.org/cvstrac/tktview?tn=2450 -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Crashes and random wrong results with certain column names
Yves Goergen wrote: Hi, I've tested my own SQLite application's identifier quoting capabilities now and found that the SQLite engine has serious problems with table/column names containing certain special characters. Just try the following: CREATE TABLE "t a" ("c a", """cb"""); INSERT INTO "t a" ("c a", """cb""") VALUES (1, 2); INSERT INTO "t a" ("c a", """cb""") VALUES (11, 12); INSERT INTO "t a" ("c a", """cb""") VALUES (21, 22); SELECT * FROM "t a"; The final select statement will either crash the client (this is true for the Win32 command line client version 3.4.0, which is simply terminated (it was more impressive with 3.3.6), and the ADO.NET adaption System.Data.SQLite) or, if you're "lucky", return random garbage data for some rows and columns. Also, when it does not crash, the result set column name for the "cb" column is simply labelled cb, i.e. without the double quotes. Viewing the original SQL CREATE TABLE statement from the sqlite_master table or using PRAGMA table_info("t a") will show the correct column names though. I could not see any problems yet in my short tests with spaces and double quotes in column names, but when the double quotes are the first and last character of a column's name, things start to get crazy. Is this a bug or am I simply not supposed to use such column names? (I was reading the formal SQL-92 syntax definition recently and thought, why not just try it out...) Yves, You should file a bug ticket at http://www.sqlite.org/cvstrac/captcha?nxp=/cvstrac/tktnew since these are all valid quoted SQL identifiers. Dennis Cote - To unsubscribe, send email to [EMAIL PROTECTED] -
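For reference, the doubled-quote escaping used in the report above is standard SQL, and SQLite builds with the fix from ticket 2450 handle it. A short sketch using Python's sqlite3 module, running the same statements:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A double quote inside a quoted identifier is escaped by doubling it,
# so """cb""" names a column literally called "cb" (quotes included).
con.execute('CREATE TABLE "t a" ("c a", """cb""")')
con.execute('INSERT INTO "t a" ("c a", """cb""") VALUES (1, 2)')
con.execute('INSERT INTO "t a" ("c a", """cb""") VALUES (11, 12)')
con.execute('INSERT INTO "t a" ("c a", """cb""") VALUES (21, 22)')

rows = con.execute('SELECT * FROM "t a"').fetchall()
print(rows)  # [(1, 2), (11, 12), (21, 22)]
```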
Re: [sqlite] Multiple connections - stale cache?
"Mark Brown" <[EMAIL PROTECTED]> wrote: > > > REPEAT TEST 1 > - > DB Counter 0:0:42:-89 > DB Counter 0:0:42:-89 > DB Counter 0:0:42:-89 > DB Counter 0:0:42:-89 > DB Counter 0:0:42:-96 <-- What did my code do to cause this? This is going to be the bug right here. I would very much like to know what your code is doing right here. Was it the same database connection that printed out the last two lines? Or different connections? What operations occurred in between these two last lines? Are you sure your file locking code is working? -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] colname=22 vs colname='22'
Greetings. I have the following db declarations:

SQLite version 3.3.8
Enter ".help" for instructions
sqlite> .schema
CREATE TABLE LSOpenJobs
(
  id integer primary key, ProjID integer, subProjID, parent, children,
  login, cust, proj, PClass, PSubClass, bdate, ddate, edate, pm, pmuk, lang,
  vendor, vEmail, invoice, ProjFund, PMTime, A_No, wDir, BiliDir, TMDir,
  DeliveryDir, paid, paidDate, notes, status
);
CREATE TABLE LSOpenProjects
(
  id integer primary key, ProjID integer, subProjID, parent, children,
  login, cust, proj, PClass, PSubClass, bdate, ddate, edate, pm, pmuk, lang,
  vendor, vEmail, invoice, ProjFund, PMTime, A_No, wDir, BiliDir, TMDir,
  DeliveryDir, paid, paidDate, notes, status
);
CREATE TABLE LSOpenSubProjects
(
  id integer primary key, ProjID integer, subProjID, parent, children,
  login, cust, proj, PClass, PSubClass, bdate, ddate, edate, pm, pmuk, lang,
  vendor, vEmail, invoice, ProjFund, PMTime, A_No, wDir, BiliDir, TMDir,
  DeliveryDir, paid, paidDate, notes, status
);
CREATE TABLE PMTime (id integer primary key, rec integer, date, secs integer);
CREATE TABLE PMUserData
(
  login primary key,
  Name,
  Password,
  email,
  phone,
  homephone,
  Lang,
  ProjOwned
);

If I do this call,

sqlite> select * from LSOpenJobs where SubProjID='22';

I get nothing. If I do this call,

sqlite> select * from LSOpenJobs where SubProjID=22;
106|22|22|22||...|c
107|22|22|22||...|c
108|22|22|22||...|c
109|22|22|22||...|c
sqlite>

I get stuff. Would anybody like to tell me why? I have done some manual record deletion and editing at the command prompt, but it should not matter, correct?

thanks,

josé

- To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] -
RE: [sqlite] Multiple connections - stale cache?
I have some odd results to report on the db counter. I put some diagnostic code in our database wrapper class to write out the db counter whenever a statement is executed. It will print out for every statement executed, whether the statement is a SELECT or UPDATE or BEGIN TRANSACTION. Richard had mentioned reading out bytes 24-27. Assuming this is a zero-based index, I was seeing the 28th byte in the file changing by 1, so I think I'm looking at the right value. And, not knowing if the counter was little or big endian, I just printed out all 4 bytes. Here are the truncated results of what I saw (removing a lot of duplicated DB Counter print-outs); the interesting part is what happens to the counter number in the second execution of the test, as it starts shifting around. Does this information help?

BEGIN TEST 1
------------
DB Counter 0:0:42:-100
DB Counter 0:0:42:-100
DB Counter 0:0:42:-99
DB Counter 0:0:42:-98
DB Counter 0:0:42:-98
DB Counter 0:0:42:-98
DB Counter 0:0:42:-98
DBC1: Re-initializing session number to 333000
DB Counter 0:0:42:-98
DB Counter 0:0:42:-98
DB Counter 0:0:42:-98
DB Counter 0:0:42:-98
DB Counter 0:0:42:-97
DBC1: got sess num = 333000
DB Counter 0:0:42:-97
DBC2: Got session number: 333000
DB Counter 0:0:42:-97
DBC2: about to update sess info, num = 35
DB Counter 0:0:42:-97
DB Counter 0:0:42:-97
DB Counter 0:0:42:-96
DB Counter 0:0:42:-95
DB Counter 0:0:42:-94
DB Counter 0:0:42:-93
DB Counter 0:0:42:-93
DB Counter 0:0:42:-93
DB Counter 0:0:42:-93
DB Counter 0:0:42:-93
DB Counter 0:0:42:-92
DB Counter 0:0:42:-91
DB Counter 0:0:42:-90
DB Counter 0:0:42:-90
DB Counter 0:0:42:-90
DB Counter 0:0:42:-90
DBC2: Got session number: 35
DB Counter 0:0:42:-90
DBC2: about to update sess info, num = 350100
DB Counter 0:0:42:-90
DB Counter 0:0:42:-90
DB Counter 0:0:42:-89
DB Counter 0:0:42:-89
DB Counter 0:0:42:-89
DBC2: Got session number: 350100
-- END TEST --

REPEAT TEST 1
------------
DB Counter 0:0:42:-89
DB Counter 0:0:42:-89
DB Counter 0:0:42:-89
DB Counter 0:0:42:-89
DB Counter 0:0:42:-96 <-- What did my code do to cause this? I think we're doing a bunch of (delete from table) operations on several other tables in this database.
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DBC1: Re-initializing session number to 333000
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DB Counter 0:0:42:-95
DB Counter 0:0:42:-94
DBC1: got sess num = 333000
DB Counter 0:0:42:-94
DBC2: Got session number: 350100 <-- Wrong value
DB Counter 0:0:42:-94
DB Counter 0:0:42:-88 <-- counter went back to previous value
DB Counter 0:0:42:-87
DB Counter 0:0:42:-86
DB Counter 0:0:42:-85
DB Counter 0:0:42:-85
DB Counter 0:0:42:-85
DB Counter 0:0:42:-85
DB Counter 0:0:42:-84
DB Counter 0:0:42:-83
DB Counter 0:0:42:-82
DB Counter 0:0:42:-82
DB Counter 0:0:42:-82
DB Counter 0:0:42:-82
DBC2: Got session number: 350100

Thanks for any help,
Mark

- To unsubscribe, send email to [EMAIL PROTECTED] -
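One note on reading the counter: bytes 24-27 of the header hold the file change counter as an unsigned big-endian 32-bit value, so printing the four bytes as signed chars is what produces values like 0:0:42:-100 above. A minimal sketch (Python, with a hypothetical change_counter helper) of reading it as a single number:

```python
import os
import struct
import sqlite3
import tempfile

def change_counter(path):
    # Bytes 24-27 of the database header, as an unsigned big-endian
    # 32-bit integer (illustrative helper, not a SQLite API).
    with open(path, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

path = os.path.join(tempfile.mkdtemp(), "counter.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()

before = change_counter(path)
con.execute("INSERT INTO t VALUES (1)")
con.commit()
after = change_counter(path)
con.close()
print(before, after)  # the counter grows with each committed write transaction
```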
Re: [sqlite] thread concurrency, inserts using transactions, bug?
Brad, it's my understanding that locking occurs at the database level, not the table level. http://www.sqlite.org/lockingv3.html

Brad House <[EMAIL PROTECTED]> wrote: > Here are my results after modifying the "begin transaction" to a "begin exclusive" > Begin transaction is a bit Lazy in that the lock escalation doesnt occur > until the pager escalates the lock due to a write. > > You'll see that the begin exclusive acquires a lock immediately and avoids > the behavoir. Hi Ken, thanks for the reply. Begin exclusive is not a sufficient solution though as that would have an effect on _every_ transaction, not just a transaction modifying the same table. If the solution is indeed to use 'BEGIN EXCLUSIVE' why then are there different transactional modes at all, considering that would mean SQLite cannot handle anything else? BTW- I had actually switched my production code to use BEGIN EXCLUSIVE after finding this bug, that was a few weeks ago, until recently when I had the time to write the test case, so I was actually aware of that 'workaround'. Thanks. -Brad - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] thread concurrency, inserts using transactions, bug?
[EMAIL PROTECTED] wrote: >> It appears that if 2 threads start transactions at the same time, >> both inserting into the same table, neither thread can finish until >> one has rolled back. > The behavior is deliberate because it gives you, the programmer, > more control and better concurrency in some situations. But it > can also result in the deadlock behavior that you observe. > > The plain BEGIN will succeed even if another process is > already expressed and interest in writing to the database. > This allows you to read from the database concurrently > with the writer, if that is what you want to do. But > because another process is already writing, you will not be > able to write. And if you try to write, you will get into a > deadlock. > > The BEGIN IMMEDIATE, on the other hand, will fail with SQLITE_BUSY > if another process has already started writing. It will not > succeed until the other process commits. This is the perferred > thing to do if you intend to write within your transaction. Hmm, I just modified my test to make each thread write to a separate table, and the same symptom occurs. I guess I assumed that this wouldn't occur with multiple tables, but apparently the lock happens on a database-wide level. Now I'm just confused at the reason why transactions have any other mode besides IMMEDIATE (or EXCLUSIVE) in SQLite. I don't think any implementations would use transactions for read-only work, as I don't believe there is any benefit to using transactions in that scenario. I don't understand how _not_ using IMMEDIATE would give you better concurrency in any situation... If you were performing read-only queries, and not using a transaction at all, you should get the same behavior [as a standard BEGIN transaction, then performing the read-only query], if there was an outstanding RESERVED or EXCLUSIVE lock. Please enlighten me if I am wrong here, I'm still pretty new to SQLite. Thanks! -Brad - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Tomcat crashes with SQLite
I am just trying the latest version of http://www.ch-werner.de/javasqlite/ (dated June 26th, 2007). We will see if Lucy is ok now... http://lucy.ysalaya.org

Cheers, Frederic de la Goublaye

On 6/25/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: "Frederic de la Goublaye" <[EMAIL PROTECTED]> wrote: > My server is under FreeBSD 5.4 > It is ok ? > I am not aware of any problems with FreeBSD 5.4. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] thread concurrency, inserts using transactions, bug?
> Here are my results after modifying the "begin transaction" to a "begin > exclusive" > Begin transaction is a bit Lazy in that the lock escalation doesnt occur > until the pager escalates the lock due to a write. > > You'll see that the begin exclusive acquires a lock immediately and avoids > the behavoir. Hi Ken, thanks for the reply. Begin exclusive is not a sufficient solution though as that would have an effect on _every_ transaction, not just a transaction modifying the same table. If the solution is indeed to use 'BEGIN EXCLUSIVE' why then are there different transactional modes at all, considering that would mean SQLite cannot handle anything else? BTW- I had actually switched my production code to use BEGIN EXCLUSIVE after finding this bug, that was a few weeks ago, until recently when I had the time to write the test case, so I was actually aware of that 'workaround'. Thanks. -Brad - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] thread concurrency, inserts using transactions, bug?
Brad House <[EMAIL PROTECTED]> wrote: > I've read http://www.sqlite.org/cvstrac/wiki?p=MultiThreading under the > "Case in point: a benchmark application I've written for this purpose" > and found that current releases of SQLite do not appear to behave in > this manner. I cannot find any documentation which clearly states > the intended behavior. > > It appears that if 2 threads start transactions at the same time, > both inserting into the same table, neither thread can finish until > one has rolled back. > > The first thread succeeds until the COMMIT is issued, then returns BUSY > on the COMMIT. The second thread keeps returning BUSY on the INSERT > statement. I've created a test case which will retry on BUSY (up to > 25x to prevent infinite loops). You'll notice the test ultimately fails. > Also in this test case, I've created a test which does a ROLLBACK when > a BUSY is hit just to show that it does succeed. > > Is this intended functionality? It appears at least a release at some > point in time did not behave this way (Jan 10, 2003 from the Wiki). > Considering > the second thread never gets a successful response to an INSERT statement, > it would seem that it should not have tried to obtain a lock on that table, > preventing the COMMIT from succeeding... but it is... It seems to be a bug > to me. >

The behavior is deliberate because it gives you, the programmer, more control and better concurrency in some situations. But it can also result in the deadlock behavior that you observe. You can easily work around this problem by always doing

  BEGIN IMMEDIATE

instead of just

  BEGIN

when you start a transaction in which you intend to write. The plain BEGIN will succeed even if another process has already expressed an interest in writing to the database. This allows you to read from the database concurrently with the writer, if that is what you want to do. But because another process is already writing, you will not be able to write.
And if you try to write, you will get into a deadlock. The BEGIN IMMEDIATE, on the other hand, will fail with SQLITE_BUSY if another process has already started writing. It will not succeed until the other process commits. This is the preferred thing to do if you intend to write within your transaction. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
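The difference is easy to reproduce. A sketch using Python's sqlite3 module, with two connections standing in for two processes: the second writer's BEGIN IMMEDIATE fails up front with SQLITE_BUSY ("database is locked") instead of deadlocking later at COMMIT time.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "busy.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit; we issue BEGIN ourselves
writer.execute("CREATE TABLE t (x)")

writer.execute("BEGIN IMMEDIATE")           # take the write (RESERVED) lock up front
writer.execute("INSERT INTO t VALUES (1)")

# A second would-be writer now fails immediately at BEGIN IMMEDIATE.
other = sqlite3.connect(path, timeout=0, isolation_level=None)
got_busy = False
try:
    other.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError:            # "database is locked" == SQLITE_BUSY
    got_busy = True

writer.execute("COMMIT")                    # release the lock ...
other.execute("BEGIN IMMEDIATE")            # ... and the second writer proceeds
other.execute("INSERT INTO t VALUES (2)")
other.execute("COMMIT")
print(got_busy)  # True
```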
Re: [sqlite] thread concurrency, inserts using transactions, bug?
Brad,

Here are my results after modifying the "begin transaction" to a "begin exclusive". Begin transaction is a bit lazy in that the lock escalation doesn't occur until the pager escalates the lock due to a write. You'll see that the begin exclusive acquires a lock immediately and avoids the behavior.

Creating a table
0 => Executing: CREATE TABLE test_table(threadnum INT, cnt INT, testcol TEXT)
0 => started
1 => started
all threads started
0 => Executing: BEGIN EXCLUSIVE
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 0, 'test0_0')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 1, 'test0_1')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 2, 'test0_2')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 3, 'test0_3')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 4, 'test0_4')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 5, 'test0_5')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 6, 'test0_6')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 7, 'test0_7')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 8, 'test0_8')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: INSERT INTO test_table VALUES(0, 9, 'test0_9')
1 => Executing: BEGIN EXCLUSIVE
1 => BUSY
0 => Executing: COMMIT
0 => finished.
1 => Executing: BEGIN EXCLUSIVE
1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0')
1 => Executing: INSERT INTO test_table VALUES(1, 1, 'test1_1')
1 => Executing: INSERT INTO test_table VALUES(1, 2, 'test1_2')
1 => Executing: INSERT INTO test_table VALUES(1, 3, 'test1_3')
1 => Executing: INSERT INTO test_table VALUES(1, 4, 'test1_4')
1 => Executing: INSERT INTO test_table VALUES(1, 5, 'test1_5')
1 => Executing: INSERT INTO test_table VALUES(1, 6, 'test1_6')
1 => Executing: INSERT INTO test_table VALUES(1, 7, 'test1_7')
1 => Executing: INSERT INTO test_table VALUES(1, 8, 'test1_8')
1 => Executing: INSERT INTO test_table VALUES(1, 9, 'test1_9')
1 => Executing: COMMIT
1 => finished.
exiting...(test succeeded)
[sqlite] thread concurrency, inserts using transactions, bug?
I've read http://www.sqlite.org/cvstrac/wiki?p=MultiThreading under the "Case in point: a benchmark application I've written for this purpose" and found that current releases of SQLite do not appear to behave in this manner. I cannot find any documentation which clearly states the intended behavior. It appears that if 2 threads start transactions at the same time, both inserting into the same table, neither thread can finish until one has rolled back. The first thread succeeds until the COMMIT is issued, then returns BUSY on the COMMIT. The second thread keeps returning BUSY on the INSERT statement. I've created a test case which will retry on BUSY (up to 25x to prevent infinite loops). You'll notice the test ultimately fails. Also in this test case, I've created a test which does a ROLLBACK when a BUSY is hit just to show that it does succeed. Is this intended functionality? It appears at least a release at some point in time did not behave this way (Jan 10, 2003 from the Wiki). Considering the second thread never gets a successful response to an INSERT statement, it would seem that it should not have tried to obtain a lock on that table, preventing the COMMIT from succeeding... but it is... It seems to be a bug to me. I have attached the test case. Any insight would be appreciated. 
Here are the results (for both RETRY_BUSY scenarios and ROLLBACK): $ gcc -Wall -D RETRY_BUSY=1 -W -o sqlitetest sqlitetest.c -l sqlite3 [EMAIL PROTECTED] ~ $ ./sqlitetest Creating a table 0 => Executing: CREATE TABLE test_table(threadnum INT, cnt INT, testcol TEXT) 0 => started 1 => started all threads started 0 => Executing: BEGIN TRANSACTION 1 => Executing: BEGIN TRANSACTION 0 => Executing: INSERT INTO test_table VALUES(0, 0, 'test0_0') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 1, 'test0_1') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 2, 'test0_2') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 3, 'test0_3') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 4, 'test0_4') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 5, 'test0_5') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 6, 'test0_6') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 7, 'test0_7') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 8, 'test0_8') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: INSERT INTO test_table VALUES(0, 9, 'test0_9') 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => 
Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => BUSY 0 => Executing: COMMIT 0 => BUSY 1 => Executing: INSERT INTO test_table VALUES(1, 0, 'test1_0') 1 => MAX BUSY CNT 1 => thread failed ... 0 => Executing: COMMIT 0 => finished. exiting...(test failed) [EMAIL PROTECTED] ~ $ gcc -Wall -D RETRY_BUSY=0 -W -o sqlitetest sqlitetest.c -l sqlite3 [EMAIL PROTECTED] ~ $ ./sqlitetest Creating a tabl
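The retry-on-BUSY loop used in the test can be sketched generically (an illustrative Python helper, not the attached C test case). Note that no amount of retrying resolves the deadlock shown above: thread 1's open transaction holds a SHARED lock that blocks thread 0's COMMIT, while thread 0's RESERVED lock blocks thread 1's INSERT, so one side must ROLLBACK, or take the lock up front with BEGIN IMMEDIATE/EXCLUSIVE.

```python
import sqlite3
import time

def execute_with_retry(con, sql, retries=25, delay=0.01):
    # Illustrative helper: retry a statement that fails with SQLITE_BUSY
    # ("database is locked"), up to `retries` times with a small sleep.
    last_error = None
    for _ in range(retries):
        try:
            return con.execute(sql)
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc):
                raise                        # some other error; don't retry
            last_error = exc
            time.sleep(delay)
    raise last_error

con = sqlite3.connect(":memory:")
execute_with_retry(con, "CREATE TABLE t (x)")
execute_with_retry(con, "INSERT INTO t VALUES (1)")
count = execute_with_retry(con, "SELECT count(*) FROM t").fetchone()[0]
print(count)  # 1
```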
[sqlite] cache_size documentation vs. page_size setting
In the documentation for cache_size, the description of how much memory a page takes up says "Each page uses about 1.5K of memory." I believe it is more accurate to say "Each page uses the database page_size plus about 512 bytes." I don't know how best to phrase it, save as a series of examples: "When the page_size is 1K (the default value), each cache page uses about 1.5K. When the page_size is 32K (the largest possible value), each cache page uses about 32.5K."

From pagerAllocatePage(...):

  pPg = sqliteMallocRaw( sizeof(*pPg) + pPager->pageSize + sizeof(u32)
                         + pPager->nExtra + MEMDB*sizeof(PgHistory) );

Best,
andy
Re: [sqlite] [Delphi] Escaping quote?
"Clay Dowling" <[EMAIL PROTECTED]> wrote:
> John Elrick wrote:
> >> A much better solution than QuotedStr is to use queries with parameters.
> >> If you're going to be running the query multiple times it also gives you a
> >> speed boost.
> >
> > True, however, that assumes you will be running the query multiple times
> > in a row, which I haven't experienced in our particular project.
>
> Even if you aren't running the query multiple times, the parametric query
> is a good idea. It avoids any possibility of SQL injection, due either to
> malicious users or programming mistakes.

It is also faster, even if you are only doing the query once.

--
D. Richard Hipp <[EMAIL PROTECTED]>

-
To unsubscribe, send email to [EMAIL PROTECTED]
-
RE: [sqlite] Multiple connections - stale cache?
Hi Richard and Joe-

It probably is some application error in our code. What puzzles me is that calling sqlite3_enable_shared_cache(1) appears to fix (or at least change) the behavior. Not sure if that triggers any ideas as to what our problem might be.

Your suggestion of looking at the database counter is a good one. I can change our app to simply read those bytes and print out the value prior to each statement we execute. Is that what you were thinking?

We currently have the following command line switches:

  -DOS_OTHER=1 -DNO_TCL -DSQLITE_DISABLE_LFS -DTHREADSAFE -DSQLITE_ENABLE_LOCKING_STYLE

Essentially, we are on the vxWorks operating system, but our build looks just like OS_UNIX. THREADSAFE and SQLITE_ENABLE_LOCKING_STYLE are options we added recently, but we have the problem without them as well. I did have to comment out most of the locking styles except the .lock style (which we are using) due to unavailability of certain headers and functionality on vxWorks.

Thanks for your help,
Mark
Re: [sqlite] [Delphi] Escaping quote?
John Elrick wrote:
>> A much better solution than QuotedStr is to use queries with parameters.
>> If you're going to be running the query multiple times it also gives you a
>> speed boost.
>
> True, however, that assumes you will be running the query multiple times
> in a row, which I haven't experienced in our particular project.

Even if you aren't running the query multiple times, the parametric query is a good idea. It avoids any possibility of SQL injection, due either to malicious users or programming mistakes.

Clay

--
Simple Content Management
http://www.ceamus.com
Re: [sqlite] [Delphi] Escaping quote?
Clay Dowling wrote:
> John Elrick wrote:
> > // Input := 'Let's meet at the pub tonight!';
> > MyFormat := 'insert into stuff (title) values (%s)';
> > SQL := Format(MyFormat, QuotedStr(Input));
> >
> > try
> >   ASQLite3DB1.Database := db;
> >   ASQLite3DB1.DefaultDir := ExtractFileDir(Application.ExeName);
> >   ASQLite3DB1.Open;
> >   ASQLite3DB1.SQLite3_ExecSQL(SQL);
> >   ASQLite3DB1.Close;
> > except
> >   ShowMessage('Bad');
> > end;
>
> A much better solution than QuotedStr is to use queries with parameters.
> If you're going to be running the query multiple times it also gives you a
> speed boost.

True, however, that assumes you will be running the query multiple times in a row, which I haven't experienced in our particular project.

John
Re: [sqlite] Multiple connections - stale cache?
"Mark Brown" <[EMAIL PROTECTED]> wrote:
> Thanks for the quick reply. Unfortunately, we are developing code
> on the vxWorks platform, so I don't think sample code would be of
> use. We have seen the problem for some time now...at least from
> 3.3.12.

The logic in SQLite that handles cache synchronization was reworked in version 3.3.14. If you have been seeing the problem since 3.3.12, that suggests an application problem or possibly a bug in your OS interface layer, not in the core SQLite.

> The submitter of the ticket appears to have the exact same
> scenario as us. Hopefully he can submit some sample code
> that will help you track down the problem.

As I pointed out in comments on ticket #2458, I can think of countless bugs in the application that could produce the same symptoms. There is no guarantee that this is an SQLite problem. And, in fact, until I have some evidence to the contrary, I'm working under the theory that this is an application bug, not an SQLite bug.

> Would there be any diagnostics (i.e. printfs) that I could
> enable that might be of use?

Bytes 24-27 of the database file contain a counter that is incremented every time the database file changes. A change in that counter is what triggers a cache flush. You might consider instrumenting your OS interface layer and making sure those bytes really are being read at the beginning of every transaction and written at the end of every transaction.

--
D. Richard Hipp <[EMAIL PROTECTED]>
RE: [sqlite] Multiple connections - stale cache?
Can you list all the compile flags you used to compile the sqlite3 library (including all -DOMIT_* defines)?

--- Mark Brown <[EMAIL PROTECTED]> wrote:
> Thanks for the quick reply. Unfortunately, we are developing code on the
> vxWorks platform, so I don't think sample code would be of use. We have
> seen the problem for some time now...at least from 3.3.12. The submitter of
> the ticket appears to have the exact same scenario as us. Hopefully he can
> submit some sample code that will help you track down the problem.
>
> Would there be any diagnostics (i.e. printfs) that I could enable that might
> be of use?
>
> > See ticket #2458.
> >
> > http://www.sqlite.org/cvstrac/tktview?tn=2458
> >
> > If you can get us a reproducible test case, that will be much
> > appreciated.
RE: [sqlite] Multiple connections - stale cache?
Thanks for the quick reply. Unfortunately, we are developing code on the vxWorks platform, so I don't think sample code would be of use. We have seen the problem for some time now...at least from 3.3.12. The submitter of the ticket appears to have the exact same scenario as us. Hopefully he can submit some sample code that will help you track down the problem.

Would there be any diagnostics (i.e. printfs) that I could enable that might be of use?

> See ticket #2458.
>
> http://www.sqlite.org/cvstrac/tktview?tn=2458
>
> If you can get us a reproducible test case, that will be much
> appreciated.
>
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Worked perfectly!
litenoob wrote:
> #!/bin/sh
> ROW_ID=`sqlite3 test.db <

FYI, the last part of the select is superfluous. You can simply do this:

  insert into t values(1,2);
  select last_insert_rowid();

HTH
Dennis Cote
Re: [sqlite] [Delphi] Escaping quote?
John Elrick wrote:
> // Input := 'Let's meet at the pub tonight!';
> MyFormat := 'insert into stuff (title) values (%s)';
> SQL := Format(MyFormat, QuotedStr(Input));
>
> try
>   ASQLite3DB1.Database := db;
>   ASQLite3DB1.DefaultDir := ExtractFileDir(Application.ExeName);
>   ASQLite3DB1.Open;
>
>   ASQLite3DB1.SQLite3_ExecSQL(SQL);
>   ASQLite3DB1.Close;
> except
>   ShowMessage('Bad');
> end;

A much better solution than QuotedStr is to use queries with parameters. If you're going to be running the query multiple times it also gives you a speed boost.

Clay

--
Simple Content Management
http://www.ceamus.com
Re: [sqlite] [Delphi] Escaping quote?
Ralf Junker wrote:
> > I'm having a problem saving strings into a column from a Delphi application
> > because they might contain the ( ' ) single quote character:
> >
> > Is there a function I should call either in SQLite or Delphi before running
> > the SQL query?
>
> Why don't you use the '%q' operator of SQLite's sqlite3_mprintf?
>
> You can use sqlite3_mprintf with Delphi 6 or later. It is interfaced as a
> varargs function in DISQLite3, for example
> (http://www.yunqa.de/delphi/sqlite3/). The varargs directive allows passing
> a variable number of arguments to sqlite3_mprintf, similar to Delphi's
> array of const declaration. Here is a Delphi example:
>
> //--
> program SQLite3_printf;
>
> {$APPTYPE CONSOLE}
>
> uses
>   DISQLite3Api;
>
> var
>   Input: PAnsiChar;
> begin
>   Input := 'Let''s meet at the pub tonight!';
>   WriteLn('sqlite3_mprintf:');
>   WriteLn(sqlite3_mprintf('insert into stuff (title) values (''%q'')', Input));
>   WriteLn;
>   WriteLn;
>   WriteLn('Done - Press ENTER to Exit');
>   ReadLn;
> end.
> //--
>
> This is the relevant section from the sqlite3_mprintf C documentation:
>
>   The %q option works like %s in that it substitutes a null-terminated
>   string from the argument list. But %q also doubles every '\'' character.
>   %q is designed for use inside a string literal. By doubling each '\''
>   character it escapes that character and allows it to be inserted into
>   the string.
>
>   For example, suppose some string variable contains text as follows:
>
>     char *zText = "It's a happy day!";
>
>   One can use this text in an SQL statement as follows:
>
>     char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES('%q')", zText);
>     sqlite3_exec(db, zSQL, 0, 0, 0);
>     sqlite3_free(zSQL);
>
>   Because the %q format string is used, the '\'' character in zText is
>   escaped and the SQL generated is as follows:
>
>     INSERT INTO table1 VALUES('It''s a happy day!');

Question: does the %q operator offer any advantages over calling QuotedStr?

John Elrick
Re: [sqlite] Multiple connections - stale cache?
"Mark Brown" <[EMAIL PROTECTED]> wrote: > Hi- > > We have a scenario where we have two different database connections to the > same database. Each database connection is running on a separate thread, > but in this situation, we are only using one connection at a time. We are > finding that sometimes one database connection will do a "select" on the > table and not get the correct value that the other database connection > recently updated. See ticket #2458. http://www.sqlite.org/cvstrac/tktview?tn=2458 If you can get us a reproducible test case, that will be much appreciated. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] In Mem Query Performance
Ok. Will notify you once I complete the test.

regards
ragha

** This email and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it! *

- Original Message -
From: Joe Wilson <[EMAIL PROTECTED]>
Date: Wednesday, June 27, 2007 0:48 am
Subject: Re: [sqlite] In Mem Query Performance

> :memory: databases only have a page size of 1024.
>
> Try various page_size settings for a file based database file and
> see what happens.
>
> I have no other suggestions.
>
> --- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> > It is a server platform, Linux SuSE9 enterprise edition.
> > 4 CPU machine, 8GB ram.
> > We want to load all the tables into a mem db of Sqlite. Achieve
> > read performance of upto 5records/sec for the table data i had
> > mentioned earlier.
> >
> > "so it would have to be file based."
> > I could not get it. Does it mean even increasing the page size
> > there would be no effect on the performance?
Re: [sqlite] In Mem Query Performance
:memory: databases only have a page size of 1024.

Try various page_size settings for a file-based database file and see what happens.

I have no other suggestions.

--- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> It is a server platform, Linux SuSE9 enterprise edition.
> 4 CPU machine, 8GB ram.
> We want to load all the tables into a mem db of Sqlite. Achieve
> read performance of upto 5records/sec for the table data i had mentioned
> earlier.
>
> "so it would have to be file based."
> I could not get it. Does it mean even increasing the page size there would be
> no effect on the performance?
[sqlite] Multiple connections - stale cache?
Hi-

We have a scenario where we have two different database connections to the same database. Each database connection is running on a separate thread, but in this situation, we are only using one connection at a time. We are finding that sometimes one database connection will do a "select" on the table and not get the correct value that the other database connection recently updated. The connections are obtained at the beginning of the program and not closed until the very end. Essentially, we run a series of tests that are successful, then we run the same tests again, this time failing.

DBC1 - begin transaction
DBC1 - sets value to 333000
DBC1 - commit transaction
DBC1 - gets value out, is 333000
DBC2 - gets value out, is 333000
DBC2 - begin transaction
DBC2 - sets value to 35
DBC2 - commit transaction
DBC2 - gets value out, is 35
DBC2 - sets value to 350100
DBC2 - gets value out, is 350100

*Repeat test*

DBC1 - begin transaction
DBC1 - sets value to 333000
DBC1 - commit transaction
DBC1 - gets value out, is 333000
DBC2 - gets value out, is 350100

I have noticed that if we make a call to sqlite3_enable_shared_cache just prior to each database connection being opened, the end result is that DBC2 will get the correct value at the beginning of the second test. Not really knowing the underlying details, it almost looks like DBC2 doesn't realize the db was changed and uses what it last thought the db held for that value.

Any thoughts on why we are seeing what we see? I could add the call to enable the shared cache, but I really want to understand why it doesn't work without it.

Thanks,
Mark
Re: [sqlite] [Delphi] Escaping quote?
> I'm having a problem saving strings into a column from a Delphi application
> because they might contain the ( ' ) single quote character:
>
> Is there a function I should call either in SQLite or Delphi before running
> the SQL query?

Why don't you use the '%q' operator of SQLite's sqlite3_mprintf?

You can use sqlite3_mprintf with Delphi 6 or later. It is interfaced as a varargs function in DISQLite3, for example (http://www.yunqa.de/delphi/sqlite3/). The varargs directive allows passing a variable number of arguments to sqlite3_mprintf, similar to Delphi's array of const declaration. Here is a Delphi example:

//--
program SQLite3_printf;

{$APPTYPE CONSOLE}

uses
  DISQLite3Api;

var
  Input: PAnsiChar;
begin
  Input := 'Let''s meet at the pub tonight!';
  WriteLn('sqlite3_mprintf:');
  WriteLn(sqlite3_mprintf('insert into stuff (title) values (''%q'')', Input));
  WriteLn;
  WriteLn;
  WriteLn('Done - Press ENTER to Exit');
  ReadLn;
end.
//--

This is the relevant section from the sqlite3_mprintf C documentation:

  The %q option works like %s in that it substitutes a null-terminated string
  from the argument list. But %q also doubles every '\'' character. %q is
  designed for use inside a string literal. By doubling each '\'' character
  it escapes that character and allows it to be inserted into the string.

  For example, suppose some string variable contains text as follows:

    char *zText = "It's a happy day!";

  One can use this text in an SQL statement as follows:

    char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES('%q')", zText);
    sqlite3_exec(db, zSQL, 0, 0, 0);
    sqlite3_free(zSQL);

  Because the %q format string is used, the '\'' character in zText is
  escaped and the SQL generated is as follows:

    INSERT INTO table1 VALUES('It''s a happy day!');
RE: [sqlite] LoadExtentions can't open DB
I don't think the current build of SQLiteDatabaseBrowser has any FTS support, so it needs to be brought up to date.

Daniel A. White
{ Kent State University: Computer Science major }
{ JMC TechHelp: Taylor Hall, server techie }
{ E-mail: [EMAIL PROTECTED] }
{ Colossians 3:17 }

-Original Message-
From: Andre du Plessis [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 26, 2007 8:27 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] LoadExtentions can't open DB

I have been testing FTS2 and it is awesome I must say; hope that the project will keep going. I have this problem though: once load extensions is enabled and fts2 is enabled, I cannot see anything in the DB anymore when I open it in SQLiteDatabaseBrowser. I CAN open it though, just can't see anything. It is the application I use to administer the DB. Any idea why or how to get it to work?
Re: [sqlite] In Mem Query Performance
Hello Joe,

It is a server platform, Linux SuSE9 enterprise edition: a 4 CPU machine with 8GB RAM. We want to load all the tables into a mem db of Sqlite, and achieve read performance of up to 5records/sec for the table data I had mentioned earlier.

"so it would have to be file based."
I could not get it. Does it mean that even increasing the page size would have no effect on the performance?

regards
ragha

- Original Message -
From: Joe Wilson <[EMAIL PROTECTED]>
Date: Tuesday, June 26, 2007 11:57 pm
Subject: Re: [sqlite] In Mem Query Performance

> --- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> > Your input is valuable. I will increase the pg size to 4KB and check.
>
> :memory: databases only use 1024 byte pages if I remember correctly,
> so it would have to be file based.
>
> > We just have a set of tables which is to be read on startup. No
> > complex Query is involved.
> > I find Sqlite to be most powerful given the size and complexity
> > it handles.
> >
> > I use the following apis to create the
> > int ret = sqlite3_open(":memory:",&m_sqliteDb1);
> > also use for all temp tables,
> > pragma PRAGMA temp_store = MEMORY
>
> That looks fine. It ought to be fast. Although "fast" is a relative term.
>
> Is this an embedded platform or a PC that you're using?
Re: [sqlite] [Delphi] Escaping quote?
Gilles Ganault wrote:
> Hello
>
> I'm having a problem saving strings into a column from a Delphi application
> because they might contain the ( ' ) single quote character:
>
> =
> // Input := 'Let's meet at the pub tonight!';
> MyFormat := 'insert into stuff (title) values ('''%s')';
> SQL := Format(MyFormat, Input);
>
> try
>   ASQLite3DB1.Database := db;
>   ASQLite3DB1.DefaultDir := ExtractFileDir(Application.ExeName);
>   ASQLite3DB1.Open;
>   ASQLite3DB1.SQLite3_ExecSQL(SQL);
>   ASQLite3DB1.Close;
> except
>   ShowMessage('Bad');
> end;
> =
>
> Is there a function I should call either in SQLite or Delphi before running
> the SQL query?

// Input := 'Let's meet at the pub tonight!';
MyFormat := 'insert into stuff (title) values (%s)';
SQL := Format(MyFormat, QuotedStr(Input));

try
  ASQLite3DB1.Database := db;
  ASQLite3DB1.DefaultDir := ExtractFileDir(Application.ExeName);
  ASQLite3DB1.Open;
  ASQLite3DB1.SQLite3_ExecSQL(SQL);
  ASQLite3DB1.Close;
except
  ShowMessage('Bad');
end;

John
Re: [sqlite] In Mem Query Performance
--- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> Your input is valuable. I will increase the pg size to 4KB and check.

:memory: databases only use 1024 byte pages if I remember correctly, so it would have to be file based.

> We just have a set of tables which is to be read on startup. No complex
> Query is involved.
> I find Sqlite to be most powerful given the size and complexity it handles.
>
> I use the following apis to create the
> int ret = sqlite3_open(":memory:",&m_sqliteDb1);
> also use for all temp tables,
> pragma PRAGMA temp_store = MEMORY

That looks fine. It ought to be fast. Although "fast" is a relative term.

Is this an embedded platform or a PC that you're using?
[sqlite] [Delphi] Escaping quote?
Hello

I'm having a problem saving strings into a column from a Delphi application because they might contain the ( ' ) single quote character:

=
// Input := 'Let's meet at the pub tonight!';
MyFormat := 'insert into stuff (title) values ('''%s')';
SQL := Format(MyFormat, Input);

try
  ASQLite3DB1.Database := db;
  ASQLite3DB1.DefaultDir := ExtractFileDir(Application.ExeName);
  ASQLite3DB1.Open;
  ASQLite3DB1.SQLite3_ExecSQL(SQL);
  ASQLite3DB1.Close;
except
  ShowMessage('Bad');
end;
=

Is there a function I should call either in SQLite or Delphi before running the SQL query?

Thank you.
Re: [sqlite] In Mem Query Performance
Hi Joe,

Your input is valuable. I will increase the pg size to 4KB and check. We just have a set of tables which is to be read on startup. No complex Query is involved. I find Sqlite to be most powerful given the size and complexity it handles.

I use the following apis to create the

  int ret = sqlite3_open(":memory:", &m_sqliteDb1);

also use for all temp tables,

  PRAGMA temp_store = MEMORY

Pls let me know if this is correct.

regards
ragha

- Original Message -
From: Joe Wilson <[EMAIL PROTECTED]>
Date: Tuesday, June 26, 2007 10:58 pm
Subject: Re: [sqlite] In Mem Query Performance

> --- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> > Thanks for the suggestion. But with that performance went down by 25%
> > further. Pls suggest an alternative. Mr DRH says it is possible we can
> > reach up to a million, if there is a way pls notify.
> ...
> > We are using Sqlite in "in Memory Mode" and we have around 200 tables.
> > Each table has 10 columns of type text.
> > Each table has around 1 records each column has around 128bytes data.
> > Select performance is around 2000records/sec. Pls suggest if there is a
> > way to improve further.
> >
> > Table structure, Query style is as below:
> >
> > create table test1 ...200
> > (
> >   key0 text,
> >   key1 text,
> >   key2 text,
> >   key3 text,
> >   key4 text,
> >   nonKey0 text,
> >   nonKey1 text,
> >   nonKey2 text,
> >   nonKey3 text,
> >   nonKey4 text,
> >   primary key(key0,key1,key2,key3,key4,key5)
> > );
> >
> > Query Used..
> >
> > SELECT * FROM TABLE136 WHERE
> > key0='kk
> > kk490' AND
> > key1='kk
> > kk491' AND
> > key2='kk
> > kk492' AND
> > key3='kk
> > kk493' AND
> > key4='kk
> > kk494'
>
> If your tables have 10 columns of 128 bytes each, then each table row is
> over 1280 bytes, which exceeds a memory page size (1024), so overflow
> pages are used. You might try a file-based database with a bigger
> page_size, say 8192.
>
> Judging by your example queries, your keys vary only after the 120th byte
> or so. That may play a role in the lack of speed. Try putting the
> differentiating characters first in the key strings.
>
> Are your slow queries really only looking at a single table, or do they
> do multiple table joins?
>
> How do you create your memory database?
> Maybe you're not making a memory database as you think you are.
[sqlite] API enhancement proposal
I'd like to propose the following simple piece of code be added to sqlite. I believe it has some benefits for those who've wrapped the sqlite APIs while keeping copies in memory of the SQL statement being executed. Add an API call that will return the saved SQL if using sqlite3_prepare_v2, or NULL if using sqlite3_prepare:

  char *sqlite3_stmt_sql(sqlite3_stmt *pStmt)
  {
    return( sqlite3VdbeGetSql( (Vdbe *)pStmt ) );
  }

I, Kenneth Long, contribute the above to the public domain.
Re: [sqlite] In Mem Query Performance
--- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> Thanks for the suggestion. But with that performance went down by 25%
> further. Pls suggest an alternative. Mr DRH says it is possible we can
> reach up to a million, if there is a way pls notify.
...
> We are using Sqlite in "in Memory Mode" and we have around 200 tables.
> Each table has 10 columns of type text.
> Each table has around 1 records each column has around 128bytes data.
> Select performance is around 2000records/sec. Pls suggest if there is a
> way to improve further.
>
> Table structure, Query style is as below:
>
> create table test1 ...200
> (
>   key0 text,
>   key1 text,
>   key2 text,
>   key3 text,
>   key4 text,
>   nonKey0 text,
>   nonKey1 text,
>   nonKey2 text,
>   nonKey3 text,
>   nonKey4 text,
>   primary key(key0,key1,key2,key3,key4,key5)
> );
>
> Query Used..
>
> SELECT * FROM TABLE136 WHERE
> key0='kk
> kk490' AND
> key1='kk
> kk491' AND
> key2='kk
> kk492' AND
> key3='kk
> kk493' AND
> key4='kk
> kk494'

If your tables have 10 columns of 128 bytes each, then each table row is over 1280 bytes, which exceeds a memory page size (1024), so overflow pages are used. You might try a file-based database with a bigger page_size, say 8192.

Judging by your example queries, your keys vary only after the 120th byte or so. That may play a role in the lack of speed. Try putting the differentiating characters first in the key strings.

Are your slow queries really only looking at a single table, or do they do multiple table joins?

How do you create your memory database? Maybe you're not making a memory database as you think you are.
Re: [sqlite] pragma page_count
On 6/26/07, Dan Kennedy <[EMAIL PROTECTED]> wrote: Compile, test, debug ... contribute. :)
[sqlite] LoadExtentions can't open DB
I have been testing FTS2 and it is awesome I must say; hope that the project will keep going. I have this problem though: once load extensions is enabled and fts2 is enabled, I cannot see anything in the DB anymore when I open it in SQLiteDatabaseBrowser. I CAN open it though, just can't see anything. It is the application I use to administer the DB. Any idea why or how to get it to work?
Re: [sqlite] where all indexing is used?
"B V, Phanisekhar" <[EMAIL PROTECTED]> wrote: > Assume a table > > create table Title ( >Id INTEGER PRIMARY KEY, >Titlename BLOB > ) > create unique index TitleIdx ON Title (Titlename) > > For which all queries index "TitleIdx" will be used? > > (1) select Titlename from Title order by Titlename > (2) select Titlename from Title where id in (...) > order by Titlename > (3) select Titlename from Title order by Titlename > limit 3 offset 10 > I believe the answer is (1) and (3). You can add the phrase "explain query plan" before each query and the output will show you which index is used. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] where all indexing is used?
Assume a table

  "create table if not exists Title (Id INTEGER PRIMARY KEY, Titlename BLOB)"
  "create unique index if not exists TitleIdx ON Title (Titlename)"

For which of these queries will the index "TitleIdx" be used?

* select Titlename from Title order by Titlename
* select Titlename from Title where id in (...) order by Titlename
* select Titlename from Title order by Titlename limit = 3 offset = 10

Regards,
Phani