Re: [sqlite] Intro SQL ?
On Sep 14, 2005, at 12:56 AM, Richard Nagle wrote: Well, now I ready for the next step, looking for a intro into SQL... Probably the most fun one out there is http://www.sqlzoo.net/ ; plus, it lets you practice against real instances of different RDBMSs. been searching google all night, found hundreds of sites, that start off with a table, but nothing showing, how to create a database, and place it in a directory, or in the same folder, with the sql application. The concept of placing a db in a directory is applicable only to file-based databases such as SQLite, FileMaker, or MS-Access. In most DBs, you have little choice about where the data goes. Since you are here on a SQLite list, and experimenting with SQLite, the following should suffice: [G4:~] you% /usr/local/bin/sqlite3 yourdb.db That will create a db called yourdb.db under your home directory. After that, consult the basic SQLite documentation... look at the help files, and the wiki. Looking for a intro, that would show commands, to list databases tables that you just created... or allow you to see, into other folders. -- Puneet Kishor
[sqlite] Command line interface SQLITE 2.8.16
Hi, I'm trying to execute a command in a DOS batch file, but I get a syntax error. Is this supported in 2.8.16? ( sqlite mydbase.db select * from table1; ) PS. When I first run sqlite mydbase and then enter select ... interactively, it works. Regards Norbert
RE: [sqlite] Intro SQL ?
Richard, Not everything in life is free or immediate. If you're going to do some decent SQL you need to invest in a reference manual - my favourite SQL primer is The Practical SQL Handbook from Addison-Wesley Publishing, ISBN 0-201-62623-3. My edition is ancient but well worth the 20 quid I paid for it. Steve To learn something, you need a book - to know something, you need to try and teach it! -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Richard Nagle Sent: 14 September 2005 06:56 To: sqlite-users@sqlite.org Subject: [sqlite] Intro SQL ? Well, now I ready for the next step, looking for a intro into SQL... been searching google all night, found hundreds of sites, that start off with a table, but nothing showing, how to create a database, and place it in a directory, or in the same folder, with the sql application. Looking for a intro, that would show commands, to list databases tables that you just created... or allow you to see, into other folders. Again, looking for the intro to the intro of SQL. where we start with the very basic of basic. So, does anyone have a few links... Major Thanks - Rick
RE: [sqlite] Network-based DB performance for Mozilla
If you can't tolerate the delays accessing the database over the network, can you make a copy of the database in a temp directory on the local machine on startup? If you copy the file after you lock the profile it should be safe to copy down to local storage. Then use the local database while the application runs, and finally copy the database back to the server, if it has been modified, when the application quits. Just to add, this is similar to how Roaming Profiles (corporate environment) work on Windows Regards Nick
[sqlite] methods to improve insertion speed with SQLite
Hello- In my application, I perform a large number of insertions (~100K) to create a SQLite database. The method which I currently have is the following: - Start a transaction - Insert into database (~100K inserts) - During the insert process, I also have to query the already inserted elements in the database to establish foreign-key constraints in my schema. My schema is like a graph structure, with many foreign-key relationships - Close transaction I am wondering if there are techniques/tricks which can improve the total insertion speed of my application. Any suggestions/feedback? Vivek
Re: [sqlite] methods to improve insertion speed with SQLite
Hello Vivek, I have a very similar application, without the foreign key constraints, however. If you use sqlite3_prepare() once for your statement, then sqlite3_bind_...() with every call, and if you wrap all the inserts in one transaction (it seems that you do), your speed should be optimal. However, I would very much like a bulk-insert call to sqlite (Oracle OCI does this, for example), where I can put many (thousands) of records into the database with one call. Is there any chance of something like this ever being added to sqlite? Martin
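[Editor's note] The prepare-once, bind-per-row, one-transaction pattern Martin describes can be sketched outside the C API. Below is a minimal illustration using Python's stdlib sqlite3 module; the table and column names are made up for the example, and executemany stands in for the prepare/bind loop since it prepares the statement once and re-binds the parameters for every row.

```python
import sqlite3

# One connection, one table; names here are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, label TEXT)")

rows = [(i, "node%d" % i) for i in range(100000)]

# A single transaction around all ~100K inserts; executemany prepares
# the INSERT once and re-binds the parameters for each row.
with con:
    con.executemany("INSERT INTO node (id, label) VALUES (?, ?)", rows)

count = con.execute("SELECT COUNT(*) FROM node").fetchone()[0]
print(count)  # 100000
```

Without the enclosing transaction, each INSERT would commit (and sync) individually, which is where most of the time usually goes.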
[sqlite] Tuning a SELECT DISTINCT query?
Hi all, I have a performance issue with the following query: SELECT DISTINCT name FROM customproperty ORDER BY name; The table and index are defined by: CREATE TABLE customproperty (parentID INTEGER NOT NULL, name VARCHAR(100) NOT NULL, val VARCHAR(100)) CREATE INDEX ixCpN ON customproperty (name) The pair (parentID, name) is unique, but name is not. The table customproperty does get quite large; the number of distinct names is much smaller. From reading the VDBE code and running sqlite through a profiler, I figured the select distinct query is executed this way: - read every row in the table. (*1) - every row is checked for whether it contains a new name. Hence, I end up with R read operations and R * log D comparison operations. (R is the number of rows in customproperty, D is the number of distinct names in customproperty) So, it seems sqlite 3.2.5 does not use the index to speed up DISTINCT queries. My question to you: How can SQL statements be reformulated to improve the select distinct performance? For example, through better usage of the information in the index? Thanks Ralf Remark: (*1) Actually, the cursor iterates over the index if 'ORDER BY name' is present.
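[Editor's note] Ralf's schema and query can be reproduced in a few lines, together with a GROUP BY formulation that is sometimes suggested as an alternative. Whether either form walks the index rather than the whole table depends on the SQLite version and its optimizer, so the sketch (stdlib sqlite3 module, sample data invented) only demonstrates that the two return identical results.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customproperty (parentID INTEGER NOT NULL, "
            "name VARCHAR(100) NOT NULL, val VARCHAR(100))")
con.execute("CREATE INDEX ixCpN ON customproperty (name)")

# Many rows, few distinct names, mirroring the situation in the post.
con.executemany("INSERT INTO customproperty VALUES (?, ?, ?)",
                [(p, "name%d" % (p % 5), None) for p in range(1000)])

distinct = [r[0] for r in con.execute(
    "SELECT DISTINCT name FROM customproperty ORDER BY name")]
grouped = [r[0] for r in con.execute(
    "SELECT name FROM customproperty GROUP BY name ORDER BY name")]
print(len(distinct), distinct == grouped)  # 5 True
```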
Re: [sqlite] Network-based DB performance for Mozilla
To add my 2p worth to all this: I am not fully aware of the Firefox internals, but as far as my understanding goes only one process can use any profile at any time. If two instances of firefox are fired up for the same user (+ profile) then what happens, as far as I understand, is that another thread of the first process is started for the second instance. The way I would tackle replacing profile data with SQLite is to enforce single processes per profile with a lock file (I believe this is already done), and on start of (the first instance of) Firefox to fire up a special database thread that opens the database exclusively - thus ensuring no on-the-fly locking is required, probably taking care of performance issues. Any accessing of the database by Firefox is then done by passing requests to this special thread (using mutexes, waits and signals and a global area); the thread then retrieves/updates the data and passes the result back to the calling thread. Although this is slightly more complex than otherwise, it is not much more. It also should increase performance (no on-the-fly locking as only one connection is made), increases stability as passing multiple queries through a single connection in an embedded database is really a no-no, and does the functions required. Yours Eddy Brett Wilson wrote: Hi everybody, I'm working on replacing a lot of Firefox's storage systems to use sqlite. It has been going well so far except for one issue. The database file is stored in the user's Mozilla profile directory. In companies and Universities using Linux, this directory is often hosted over the network. In these cases, sqlite performance can be quite poor. I am aware that writing performance can be bad in these cases, but we don't do too many writes. I am mostly concerned about read performance, since a lot of this stuff (history, bookmarks) drives the UI. The lag, even over a fast network, can be noticeable.
I am also concerned about file locking, since the documentation does not recommend using files over the network. Can anybody suggest what to do about this problem? This is not something that can be avoided, since some people will have this configuration and will not have any say about it. Firefox must perform reasonably in these cases. One thing that could work in our favor is that Mozilla already does locking on the profile, so access will be restricted to our one process. Is there anything that we can do to take advantage of this to avoid having to query the file for reads even when the content is cached? It looks like we will have multiple database connections from this process. I will work on minimizing the number of queries in the common cases, but any little bit of performance will help here. Thanks, Brett
Re: [sqlite] Network-based DB performance for Mozilla
Please excuse me doing another reply to this but... Sorry about this daft question, but you have indexed everything OK and designed the database to a reasonable BCNF (Boyce-Codd Normal Form) model, haven't you? Eddy Brett Wilson wrote: [quoted message elided; see above]
Re: [sqlite] Command line interface SQLITE 2.8.16
On 9/14/05, Sijm, Norbert [EMAIL PROTECTED] wrote: Hi, I'm trying to execute a command in a DOS batchfile, but I get a syntax error. Is this supported in 2.8.16 ? ( sqlite mydbase.db select * from table1; ) Try enclosing your SQL statements in double quotes, like this: sqlite mydbase.db "select * from table1;" -- Nemanja Corlija [EMAIL PROTECTED]
Re: [sqlite] methods to improve insertion speed with SQLite
Hello You can also speed up the inserts by creating the index after the inserts. To check the constraints you could use QDBM (http://qdbm.sourceforge.net/). Rajan, Vivek K wrote: [quoted message elided; see above]
Re: [sqlite] methods to improve insertion speed with SQLite
On 9/14/05, Rajan, Vivek K [EMAIL PROTECTED] wrote: I am wondering if there are techniques/tricks which can improve the total insertion speed of my application. Any suggestions/feedback? Remove indexes from the tables you're inserting into. If you need that data for queries later you can create a 'data warehouse'. You later copy the data you've collected to separate tables, index it as necessary, then you can do your queries/reporting against that table. Doesn't work for everyone, but it's sometimes very useful when up-to-the-second data isn't required. --- The Castles of Dereth Calendar: a tour of the art and architecture of Asheron's Call http://www.lulu.com/content/77264
[sqlite] create database file at runtime
Hi everyone, Two quick questions: 1) Is there anything special the supplied sqlite3.exe does to create a database file. I want to programmatically create a database file at runtime, so can I just create an empty xxx.db file, and go ahead with create table statements to populate it? The documentation says to create a sqlite database you should use the supplied app (sqlite3.exe). Is just doing that myself ok? 2) While creating tables, I've been specifying the data type after the field name: create table test(field_1 integer, field_2 text); The only advantage of this is that the database could possibly store the data in a more compact form, correct? If that's not a concern, leaving it all as text would not make a difference? Thanks, Mark
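[Editor's note] Question 1 can be answered with a runnable sketch: nothing special is needed. Opening a path that does not exist yet creates the database file, and SQLite also treats a pre-existing zero-length file as an empty database. The sketch below uses Python's stdlib sqlite3 module; the file name is illustrative.

```python
import os
import sqlite3
import tempfile

# Open a path that does not exist yet; the db file is created for us.
path = os.path.join(tempfile.mkdtemp(), "xxx.db")
assert not os.path.exists(path)

con = sqlite3.connect(path)                  # file comes into existence here
con.execute("CREATE TABLE test (field_1 INTEGER, field_2 TEXT)")
con.commit()
con.close()
print(os.path.exists(path))  # True
```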
[sqlite] Column Naming Convention (yet again)
Hello, It's that time of the month again - column naming convention discussion. The documentation states quite clearly: Case 4: short_column_names=OFF and full_column_names=OFF For this case (which was the default prior to version 3.1.0) the result is the same as for case 2 for simple queries and is the same as case 5 for joins. In other words, for a query with only a single table in the FROM clause, the name of the result set column matches the text of the expression that defined the column as in case 2. For a join, the column name is of the form TABLE.COLUMN where TABLE and COLUMN are the names of the table and column from which the data comes, as defined in the original CREATE TABLE statement. Now this is great, but (and this isn't the first time I've mentioned it) CREATE TABLE TEST1 (a INTEGER, b TEXT, PRIMARY KEY (a)) CREATE TABLE TEST2 (a INTEGER, b TEXT, PRIMARY KEY (a)) SELECT T1.a, T1.b, T2.a, T2.b FROM TEST1 T1, TEST2 T2 WHERE T1.a = T2.a --- this is fine and returns the column names as written SELECT T1.*, T2.* FROM TEST1 T1, TEST2 T2 WHERE T1.a = T2.a --- this STILL returns without the table-alias prefix. In my mind these statements both fall into the category mentioned above. Can I ask what the logic is here? (this page doesn't seem to cover it: http://www.sqlite.org/cvstrac/wiki?p=ColumnNames) Following the upgrade to 3.2.5 (on the basis this had been fixed), I do not want to tell my users to expand their T1.* to include all the column names, as this is a pain when the tables have hundreds of columns. Any thoughts appreciated. Regards, Steve
Re: [sqlite] create database file at runtime
On 9/14/05, Mark Wyszomierski [EMAIL PROTECTED] wrote: Hi everyone, Two quick questions: 1) Is there anything special the supplied sqlite3.exe does to create a database file. I want to programatically create a database file at runtime, so can I just create an empty xxx.db file, and go ahead with create table statements to populate it? The documentation says to create a sqlite database you should use the supplied app (sqlite3.exe). Is just doing that myself ok? See http://www.sqlite.org/capi3ref.html#sqlite3_open If you're not using C API directly, look up the docs for your wrapper, but in general db file should be created for you when you call open() method of your wrapper with filename that doesn't exist yet. 2) While creating tables, I've been specifying the data type after the field name: create table test(field_1 integer, field_2 text); The only advantage of this is that the database could possible store the data in a more compact form, correct? If that's not a concern, leaving it all as text would not make a difference? Data type affinity is used at runtime too (comparison for example). See http://www.sqlite.org/datatype3.html for more details. -- Nemanja Corlija [EMAIL PROTECTED]
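[Editor's note] The "affinity is used at runtime too" point is easy to demonstrate: the declared column type decides how a bound value is actually stored, per the datatype3 documentation. A small sketch with Python's stdlib sqlite3 module (table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (i INTEGER, s TEXT)")

# Insert a text value into the INTEGER column and an integer into the
# TEXT column; affinity converts each to the column's preferred type.
con.execute("INSERT INTO t VALUES ('42', 42)")

row = con.execute("SELECT typeof(i), typeof(s) FROM t").fetchone()
print(row)  # ('integer', 'text')
```

So declaring everything as text is not purely cosmetic: it changes storage type and, consequently, how comparisons and sorting behave.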
RE: [sqlite] methods to improve insertion speed with SQLite
However, I would very much like a bulk insert - call to sqlite (Oracle OCI does this, for example), where i can put many (thousands) of records into the database with one call. Is there any chance of something like this ever to be added to sqlite? I can't speak authoritatively, but I don't see what the impact would be. Adding rows in bulk, with other databases, is mostly about transferring the data more optimally and/or avoiding the SQL processing engine. Both of those problems have already been solved by SQLite - you can't get any more optimal than intra-process communication, and given a prepared INSERT statement that is executed repeatedly, there is no SQL processing engine involved. It might be convenient to be able to provide arrays of inputs to a prepared statement, but the only thing you'd gain performance-wise is eliminating the function call overhead of all calls past the first, so even that doesn't provide any serious benefit. -Tom
Re: [sqlite] methods to improve insertion speed with SQLite
Hello Thomas, you are right in principle: the gain should not be too great. However, the number of calls to sqlite3_bind_text is number of records * number of columns, which is *very* high. Also, I suspect that sqlite3_bind_text makes a copy of the text I pass, which could be eliminated too. Or am I wrong there? Martin Thomas Briggs wrote: [quoted message elided; see above]
[sqlite] Re: methods to improve insertion speed with SQLite
Martin Engelschalk wrote: Also, i suspect that sqlite3_bind_text makes a copy of the text i pass, which could be eliminated too. Or am i wrong there? Only if you are using SQLITE_TRANSIENT flag, if I understand correctly. Use SQLITE_STATIC or an actual destructor callback. Igor Tandetnik
RE: [sqlite] Column Naming Convention (yet again)
I've fixed this in my local copy, but sadly I am completely useless at CVS etc... so here's what I did. Please let me know if anyone suspects this is dodgy or plain wrong. FILE: select.c, LINE: 1136 Changed: if( longNames ) To: if( longNames || pTabList->nSrc>1 ){ Many thanks, Steve -Original Message- From: Dennis Cote [mailto:[EMAIL PROTECTED] Sent: 14 September 2005 16:04 To: sqlite-users@sqlite.org Subject: Re: [sqlite] Column Naming Convention (yet again) Drew, Stephen wrote: [quoted message elided; see above] Steve, You are correct. Something is still broken, either the code or the docs. I think it's the code. These two statements should produce the same column headings, and since they are joining two tables, they should both have full (table.column) column names. Dennis Cote
Re: [sqlite] Re: methods to improve insertion speed with SQLite
YES! Thank you very much, Igor. I overlooked that. Now my application runs a little bit faster (6%) Igor Tandetnik schrieb: Martin Engelschalk wrote: Also, i suspect that sqlite3_bind_text makes a copy of the text i pass, which could be eliminated too. Or am i wrong there? Only if you are using SQLITE_TRANSIENT flag, if I understand correctly. Use SQLITE_STATIC or an actual destructor callback. Igor Tandetnik
[sqlite] Re: Thread safety guarantees
Hello, if I understand the whole issue correctly, there is only one reason why it's not safe to use a sqlite handle in any thread of a process (as long as only one thread at a time uses it): File locks are tied to the thread that created them (except this special Redhat version). That's why on Windows everything's safe, since file locks are per-process there. If that's correct, couldn't this problem be solved by letting sqlite use a dedicated thread to do the locking? Each time a lock needed to be set/removed, a message would be sent to that thread, which would then perform the locking. That should make it safe to use the sqlite* from any thread. For single-threaded applications that would mean more thread switching. For multithreaded applications, however, not only would life become easier, but performance may well increase. In my applications I have a dedicated database access thread. Each time a query needs to be executed, it's sent to this thread. In this case I have thread switching anyway. Only having it when a lock is set/removed will decrease the amount of switches. Best regards Rolf Schäuble
[sqlite] rowId continuity?
Hi, I have a screen which displays rows of my table in a paged form (i.e. one can do Back/Previous on them and navigate across pages). I use the rowId of the top and bottom rows to query for the next/previous set of rows in the page. However I just read somewhere that the rowId need not be contiguous and this would imply that my paging mechanism is all wrong. The rowId in my table is a bigint primary key. Any suggestions? Thanks!
[sqlite] Re: rowId continuity?
R S wrote: I have a screen which displays rows of my table in a paged form (i.e. one can do Back/Previous on them and navigate across pages). I use the rowId of the top and bottom rows to query for the next/previous set of rows in the page. However I just read somewhere that the rowId need not be contiguous and this would imply that my paging mechanism is all wrong. rowId cannot possibly be contiguous under all circumstances. Imagine that you inserted a hundred rows, then deleted one in the middle. That would necessarily create a hole in the contiguous range. I don't quite see why you need rowId to be contiguous to implement your paging. Can't you do something like this: select * from mytable where rowId > :lastRowId order by rowId limit 100 Igor Tandetnik
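[Editor's note] Igor's keyset-paging suggestion works precisely because nothing in it assumes contiguous rowids: you remember the last rowid of the previous page and ask for the rows after it. A runnable sketch with Python's stdlib sqlite3 module (table name from the thread, data invented), including a deliberately deleted row to punch a hole in the id range:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(i, "row%d" % i) for i in range(1, 251)])
con.execute("DELETE FROM mytable WHERE id = 100")   # punch a hole

def page(last_rowid, size=100):
    # Keyset paging: everything strictly after the last id we saw.
    return con.execute(
        "SELECT id, v FROM mytable WHERE id > ? ORDER BY id LIMIT ?",
        (last_rowid, size)).fetchall()

p1 = page(0)
p2 = page(p1[-1][0])
print(len(p1), p1[-1][0], len(p2))  # 100 101 100
```

The first page simply skips over the hole (it ends at id 101 instead of 100), and the second page picks up from there; no page is ever short because of deleted rows.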
Re: [sqlite] rowId continuity?
Why not use another column? Provide sorted data for your users in some way that provides them with more value. On 9/14/05, R S [EMAIL PROTECTED] wrote: Hi, I have a screen which displays rows of my table in a paged form (i.e. one can do Back/Previous on them and navigate across pages). I use the rowId of the top and bottom rows to query for the next/previous set of rows in the page. However I just read somewhere that the rowId need not be contigous and this would imply that my paging mechanism is all wrong.
[sqlite] Porting...
I'm on my second day of porting SQLite to an embedded environment. I'm not sure if this is the appropriate place to put these comments about the experience. I've seen some type of ticket mechanism in CVS but I don't know if these comments rise to the level of a bug, etc. Anyway, three things: 1. There are many instances where (char *) to (unsigned char *) implied casts are done (and vice versa). These should really be fixed so fewer compiler warnings are generated. Perhaps I'm the only one who likes my builds to have no warnings... 2. There is an implied availability of ctype, stdarg, stdio, etc. All of these #includes and references to the routines used within them should be moved to the OS file. It is not always the case that these routines are available and in fact, they are not available on my platform. 3. I turned on almost all of the OMIT definitions since size is very important on my embedded platform. This causes a few problems with definitions in parse.c. This is documented somewhere and I expected it. It shouldn't be necessary to hand-code out the parse.c code for triggers, etc. though. It would be much easier if, based on the various OMIT definitions, stub routines would be defined that force a parse error, etc. The porting effort has now turned to a testing effort...
Re: [sqlite] Porting...
On 9/14/05, Jay Siegel [EMAIL PROTECTED] wrote: The porting effort has now turned to a testing effort... Don't forget there are a lot of regression test cases provided with the source.
Re: [sqlite] Re: Thread safety guarantees
Rolf Schaeuble wrote: [quoted message elided; see above] I disagree with the entire approach. It seems hackish. Just have each thread create its own sqlite handle. So long as one thread does not need to pass off an existing handle to a different thread you will be fine.
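[Editor's note] The one-handle-per-thread approach is easy to sketch. Below, each worker thread opens its own private connection to the same database file and no handle ever crosses a thread boundary; Python's stdlib sqlite3 module stands in for the C API, and the names are invented. The timeout lets a writer wait out another thread's lock instead of failing immediately.

```python
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "shared.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE log (thread TEXT)")
con.commit()
con.close()

def worker(name):
    # Each thread creates and uses its own handle, as Dennis suggests.
    c = sqlite3.connect(path, timeout=10)
    with c:
        c.execute("INSERT INTO log VALUES (?)", (name,))
    c.close()

threads = [threading.Thread(target=worker, args=("t%d" % i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

c = sqlite3.connect(path)
total = c.execute("SELECT COUNT(*) FROM log").fetchone()[0]
c.close()
print(total)  # 4
```

Note this gives each thread its own transaction; it does not address Rolf's case below, where several threads must write inside one shared transaction.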
Re: [sqlite] Porting...
On Wed, 2005-09-14 at 11:55 -0700, Jay Siegel wrote: I turned on almost all of the OMIT definitions since size is very important on my embedded platform. This causes a few problems with definitions in parse.c. This is documented somewhere and I expected it. It shouldn't be necessary to hand-code out the parse.c code for triggers, etc. though. It would be much easier if, based on the various OMIT definitions, stub routines would be defined that force a parse error, etc. Pass the appropriate -D options into lemon when you are generating parse.c and no hand editing is required. -- D. Richard Hipp [EMAIL PROTECTED]
Re: [sqlite] Re: rowId continuity?
I don't quite see why you need rowId to be contiguous to implement your paging. Can't you do something like this: select * from mytable where rowId > :lastRowId order by rowId limit 100 OK I do it this way, but my concern was: are these rows refilled later? Also what happens if the rowid exceeds U32?
Re: [sqlite] Re: rowId continuity?
--- R S [EMAIL PROTECTED] wrote: I don't quite see why you need rowId to be contiguous to implement your paging. Can't you do something like this: select * from mytable where rowId > :lastRowId order by rowId limit 100 OK I do it this way, but my concern was are these rows refilled later? Also what happens if the rowid exceeds U32? Wouldn't select * from mytable order by rowid limit 100 offset pgCount*100; be simpler than managing/remembering the start/last rowid for each page? FWIW, I think I read here that SQLite3 stores rowids as a 64-bit integer. -Clark
[sqlite] Re: Re: rowId continuity?
R S [EMAIL PROTECTED] wrote: I don't quite see why you need rowId to be contiguous to implement your paging. Can't you do something like this: select * from mytable where rowId > :lastRowId order by rowId limit 100 OK I do it this way, but my concern was are these rows refilled later? Also what happens if the rowid exceeds U32? It is rather difficult to force SQLite into a situation where it assigns non-increasing ids - essentially, you have to either overflow the 64-bit counter, or explicitly create a row with the id of 2^63-1 (the largest signed 64-bit rowid). If you don't want to take any chances, declare the column with the autoincrement keyword. For more details, see http://www.sqlite.org/autoinc.html Igor Tandetnik
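[Editor's note] The AUTOINCREMENT guarantee Igor points to can be shown in miniature: with the keyword, a rowid is never reused even after the row holding the largest id is deleted. A sketch with Python's stdlib sqlite3 module (table name invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
con.execute("INSERT INTO t (v) VALUES ('a')")   # id 1
con.execute("INSERT INTO t (v) VALUES ('b')")   # id 2
con.execute("DELETE FROM t WHERE id = 2")       # delete the max row

# With AUTOINCREMENT the next insert gets id 3, never a reused 2.
con.execute("INSERT INTO t (v) VALUES ('c')")

ids = [r[0] for r in con.execute("SELECT id FROM t ORDER BY id")]
print(ids)  # [1, 3]
```

The cost is a little bookkeeping in the internal sqlite_sequence table, which is why the docs suggest plain INTEGER PRIMARY KEY when reuse after deletes is acceptable.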
Re: [sqlite] Re: Thread safety guarantees
Dennis Jenkins wrote: Rolf Schaeuble wrote: [quoted message elided; see above] I disagree with the entire approach. It seems hackish. Just have each thread create its own sqlite handle. So long as one thread does not need to pass off an existing handle to a different thread you will be fine. That doesn't work for me. During one single transaction several threads need to insert data into the db; due to this transaction, all threads have to share a single database handle. I find it much more hackish that I have to implement a dedicated thread that collects the data and writes it to the db. It's more code (thus more bugs), and it causes more thread switching, so it's less efficient.
Re: [sqlite] need to write during a long read
On Thu, 2005-09-08 at 14:36 -0400, D. Richard Hipp wrote: On Thu, 2005-09-08 at 10:48 -0400, Mark Drago wrote: However, it seems that for every rollback that I do there is a file left in the directory with the databases. I have 30-something files named like the following: 'ame_log.db-mj0E2E1262'. ame_log.db is the filename of the main log database. The contents of the file are the full path to the main log database's journal and the full path to the attached queue database's journal. Should something be getting rid of these files? Has anyone else seen this? Those are the master journal files. They are used to make sure that commits to multiple databases occurs atomically. They should be deleted automatically. I do not know why they are not being removed for you. I will look into it. I don't know if this is going to help tracking down this issue or not, but it seems to take nearly a half a second for the insert into queue; begin transaction; move from queue to main log; delete from queue; commit transaction process to complete. Could this have anything to do with how the databases are being synced to disk? I'm trying to think of anything that I'm doing that could be weird or out of the ordinary and running 'pragma synchronous=off' on the two attached databases is really the only thing I can think of. Any other ideas or things I should look in to? Mark. signature.asc Description: This is a digitally signed message part
Re: [sqlite] Re: Thread safety guarantees
- Original Message - From: Rolf Schaeuble [EMAIL PROTECTED] To: sqlite-users@sqlite.org Sent: Wednesday, September 14, 2005 2:43 PM Subject: Re: [sqlite] Re: Thread safety guarantees That doesn't work for me. During one single transaction several threads need to insert data into the db; due to this transaction, all threads have to share a single database handle. I find it much more hackish that I have to implement a dedicated thread that collects the data and writes it to the db. It's more code (thus more bugs), and it causes more thread switching, so it's less efficient. You must already be doing some sort of synchronization here, though. Afterall, you can't have all those threads all issuing SQL commands against the same connection pointer at the same time without some sort of mutex. Robert
Re: [sqlite] Re: Thread safety guarantees
Wed, Sep 14, 2005 at 11:43:43PM +0200: Rolf Schaeuble wrote: Dennis Jenkins wrote: Rolf Schaeuble wrote: Hello, if I understand the whole issue correctly, there is only one reason why it's not safe to use a sqlite handle in any thread of a process (as long as only one thread at a time uses it): File locks are tied to the thread that created them (except this special Redhat version). That's why on Windows everything's safe, since file locks are per-process there. If that's correct, couldn't this problem be solved by letting sqlite use a dedicated thread to do the locking? Each time a lock needed to be set/removed, a message would be sent to that thread which then would perform the locking. That should make it safe to use the sqlite* from any thread. For single threaded applications that would mean more thread switching. For multithreaded applications, however, life wouldn't only become easier, but performance may well increase. In my applications I have a dedicated database access thread. Each time a query needs to be executed, it's sent to this thread. In this case I have thread switching anyway. Only having it when a lock is set/removed will decrease the amount of switches. Best regards Rolf Schäuble I disagree with the entire approach. It seems hackish. Just have each thread create it's own sqlite handle. So long as one thread does not need to pass off an exisiting handle to a different thread you will be fine. That doesn't work for me. During one single transaction several threads need to insert data into the db; due to this transaction, all threads have to share a single database handle. No. Each thread needs to open its own db handle and let SQLite make the lock job. That's all. Since you don't need a db handle opened by thread(a) to be used by another thread (b), you won't have any problem... If you need to, you will need my lost patch :) http://www.sqlite.org/cvstrac/tktview?tn=1417 -- Guillaume FOUGNIES