Re: [sqlite] Shared Cache for Processes
Hi,

Speaking only as a non-professional, I will still try to answer. I don't want to comment on the benefits of shared-cache mode here, only on the question of whether it can be enabled under Apache. And I believe it can. As you say, Apache pre-forks several processes, but within each process it pre-establishes a number of threads, and I think the default is on the order of 100. So in practice you will only use a single process and hence can easily enable shared-cache mode. This assumes that your application is loaded into Apache and does not run via CGI.

Regards, Edzard Pasma

On 24-Jan-2008, at 16:48, Brandon, Nicholas (UK) wrote:

> Hi all,
>
> Could the 'Shared Cache' option in SQLite theoretically improve the performance of the db if used by multiple processes? The application in particular is Apache using pre-forked processes accessing the same db. The info at http://www.sqlite.org/sharedcache.html seems to indicate it could benefit threads only. I believe it would not, but would like confirmation from someone else.
>
> Thanks
> Nick

To unsubscribe, send email to [EMAIL PROTECTED]
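[Editor's sketch, not from the thread.] The process/thread distinction above can be illustrated with Python's sqlite3 module: shared cache only helps connections inside one process. Here two connections in the same Python process share one in-memory database via a `cache=shared` URI filename (URI support assumed, Python 3.4+); the pre-forked Apache workers discussed above are separate processes and could not do this.

```python
import sqlite3

# Two connections in the same process, both opting into the shared cache
# for the same in-memory database.
conn_a = sqlite3.connect("file::memory:?cache=shared", uri=True)
conn_b = sqlite3.connect("file::memory:?cache=shared", uri=True)

conn_a.execute("CREATE TABLE t (x INTEGER)")
conn_a.execute("INSERT INTO t VALUES (1)")
conn_a.commit()

# conn_b sees the data because both connections use the same shared cache;
# a plain ":memory:" database would be private to conn_a.
rows = conn_b.execute("SELECT x FROM t").fetchall()
print(rows)  # [(1,)]
```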
Re: [sqlite] Sqlite File Access Restriction
1) SQLite database file access restriction: is there a built-in or preferred method to block other processes from writing to, or even accessing, a database file once it is opened first by another process? I can get this to work by having the first process that opens the file issue a BEGIN Exclusive or BEGIN Restricted, but there is always a chance that, right after I commit and go to issue another BEGIN, a 2nd process can grab and hold the file. I would like the 2nd process to see whether another process has control of the file and warn/adapt based on that. Definitely NO multiple writers.

That is absolutely the only way to get that kind of 'useful' lock on Windows. End of story! There is a pragma locking_mode that may still be considered: http://sqlite.org/pragma.html#pragma_locking_mode
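[Editor's sketch, not from the thread.] The locking_mode pragma mentioned above can be demonstrated as follows: with `PRAGMA locking_mode=EXCLUSIVE`, the first connection keeps the file lock after its first write until it closes, so a second connection (standing in for the 2nd process) is refused rather than grabbing the file between transactions.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "locked.db")

holder = sqlite3.connect(path)
holder.execute("PRAGMA locking_mode=EXCLUSIVE")
holder.execute("CREATE TABLE t (x)")  # first write acquires and keeps the lock
holder.commit()                       # lock is retained despite the commit

other = sqlite3.connect(path, timeout=0.1)
blocked = False
try:
    other.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError:
    blocked = True  # "database is locked"
print("second writer blocked:", blocked)
```

The second process can catch the error, warn the user, and adapt, which is the behaviour asked for above.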
Re: [sqlite] shared cache mode locking
The ticket has already been resolved, I see. So it has been considered a bug. In my earlier reply I tried to defend the current behaviour as being in line with the document, http://sqlite.org/sharedcache.html. I'm happy to change my mind now. Only I miss something in the model as described in the document. This may be either:
- exclusive transactions as a new kind of transaction, apart from read- and write-transactions, or
- database-level locking as a new level above transaction-level locking.
Maybe this suggestion is too naive; anyway, it helps me explain the wonderful cache sharing.

Ken wrote:

> Ed, Dan opened a ticket. I agree the documentation isn't clear on the Exclusive locking state. Not really sure if this is by design or a bug at this stage. I do think it's a great feature of shared-cache mode to allow table-level locking. But I'm curious: with this table-level locking, what would happen if two threads performed writes to two separate tables concurrently using only a BEGIN IMMEDIATE? Thread A writes to tab1, thread B writes to tab2. (Is this allowed? Or is SQLITE_LOCKED returned?) If it is allowed, would there be two journal files existing concurrently? And what happens during a crash with two journals? This gets complicated very quickly. Ken
>
> Ed Pasma [EMAIL PROTECTED] wrote:
>
>> Hello, Empirically I found that it is exactly true. Must admit I'm confused, but maybe it is in line with the shared-cache locking model. This does not mention the EXCLUSIVE locking state. The most 'secure' locking state it mentions is a write-transaction, and this can coexist with read-transactions from others. Thus BEGIN EXCLUSIVE starts a write-transaction and the ongoing read does not interfere. The error message seems to clarify the situation further: "database table is locked". Thus the collision occurs at the table level. And yes, taking different tables for read and write, it does not occur. Practically this may not help very much.
>>
>> But maybe the following does, in case you have a busy_timeout setting. With shared-cache mode enabled, the timeout setting appears to be ignored by SQLite. This makes locking situations surface rather soon, also when there is no deadlock. The situation may be handled by a programmatic retry? Regards, Ed
>>
>> On 19-Dec-2007, at 19:12, Ken wrote:
>>
>>> Some additional info: when SQLITE_LOCKED is returned, there is another thread that appears to be reading the same table. Does sqlite3_step return SQLITE_LOCKED in this case? Thanks, Ken
>>>
>>> Ken wrote:
>>>
>>>> While using the new 3.5.4 sqlite3_enable_shared_cache I ran into a strange lock situation. SQLITE_LOCKED is returned from an insert statement, even though the thread/connection performed a successful BEGIN EXCLUSIVE transaction.
>>>>
>>>> begin exclusive
>>>> insert into table...  --- returns SQLITE_LOCKED
>>>>
>>>> Is it possible for both connections to begin exclusive transactions while having the shared cache enabled? Thanks, Ken
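[Editor's sketch, not from the thread.] The programmatic retry suggested above could look like the helper below (the name `execute_with_retry` is ours). The idea is that when a table-lock error surfaces immediately, without the busy timeout being applied, the application sleeps briefly and tries again a bounded number of times.

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), retries=5, delay=0.05):
    """Retry a statement that fails with a lock error, as a stand-in
    for the busy timeout that shared-cache table locks bypass."""
    for attempt in range(retries):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as exc:
            # Only lock errors are worth retrying; re-raise on the last try.
            if "locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay)

conn = sqlite3.connect(":memory:")
result = execute_with_retry(conn, "SELECT 1").fetchone()
print(result)  # (1,)
```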
Re: [sqlite] shared cache mode locking
No, you did not confuse me. We are talking about different things, it appears. My post refers to the shared-cache locking model (http://sqlite.org/sharedcache.html). The document is clear by itself. What makes it confusing is that a shared cache instance exists as a single normal connection, which may coincide with other, normal database connections. Quite a generous feature. But it means that the two locking models apply at the same time. The joint connections within a shared cache are subject to the regular locking model in relation to possible other connections to the same database. Confusing or not?

Ken wrote:

> Ed, Sorry if I confused you; a write lock is really an EXCLUSIVE lock per the SQLite documentation. I used the two interchangeably, pardon my error. A BEGIN EXCLUSIVE indicates the beginning of a transaction; it escalates the database lock to an EXCLUSIVE lock. A plain BEGIN TRANSACTION does not immediately do this; rather, it waits until the buffer cache spills to disk. At that point it attempts to escalate the RESERVED lock to a PENDING and then an EXCLUSIVE lock. There is only one type of EXCLUSIVE (write) lock; it is database-wide and is all or nothing. Once you have the lock, it prevents other access to the DB. Ken
Re: [sqlite] shared cache mode locking
Hello, Empirically I found that it is exactly true. Must admit I'm confused, but maybe it is in line with the shared-cache locking model. This does not mention the EXCLUSIVE locking state. The most 'secure' locking state it mentions is a write-transaction, and this can coexist with read-transactions from others. Thus BEGIN EXCLUSIVE starts a write-transaction and the ongoing read does not interfere. The error message seems to clarify the situation further: "database table is locked". Thus the collision occurs at the table level. And yes, taking different tables for read and write, it does not occur. Practically this may not help very much.

But maybe the following does, in case you have a busy_timeout setting. With shared-cache mode enabled, the timeout setting appears to be ignored by SQLite. This makes locking situations surface rather soon, also when there is no deadlock. The situation may be handled by a programmatic retry? Regards, Ed

On 19-Dec-2007, at 19:12, Ken wrote:

> Some additional info: when SQLITE_LOCKED is returned, there is another thread that appears to be reading the same table. Does sqlite3_step return SQLITE_LOCKED in this case? Thanks, Ken
>
> Ken [EMAIL PROTECTED] wrote:
>
>> While using the new 3.5.4 sqlite3_enable_shared_cache I ran into a strange lock situation. SQLITE_LOCKED is returned from an insert statement, even though the thread/connection performed a successful BEGIN EXCLUSIVE transaction.
>>
>> begin exclusive
>> insert into table...  --- returns SQLITE_LOCKED
>>
>> Is it possible for both connections to begin exclusive transactions while having the shared cache enabled? Thanks, Ken
Re: [sqlite] Huge performance drop when using prepared statement
Hello, I think I got it, but it is disappointingly simple, see below. Ed.

Markus Gritsch wrote:

> Even more strange:
>
> c.execute("SELECT * FROM entry, word, word_entry WHERE entry.id = word_entry.entry_id AND word.id = word_entry.word_id AND word.word GLOB ?", ('tes*',))
>
> takes less than 1 ms, but
>
> c.execute("SELECT * FROM entry, word, word_entry WHERE entry.id = word_entry.entry_id AND word.id = word_entry.word_id AND word.word GLOB ?", ('test',))
>
> takes several hundred ms.

The execute in Python includes prepare (or get from cache), bind and the first step. The answer must be that the wait time lies in the first step. The engine is doing a full scan, and it all depends how far into the table it needs to go to find the first match. So the bind value with * just comes across a match sooner.

Wilhelm Braun wrote:

> I just tried for fun:
>
> start = time.time()
> SQLString = ('''SELECT * FROM entry, word, word_entry WHERE entry.id = word_entry.entry_id AND word.id = word_entry.word_id AND word.word GLOB '%s' ''' % 'hui*')
> c.execute(SQLString)
>
> and it is as fast as your first one - seems a pysqlite problem to me. I know they say this is not a secure way to do it -- well.

This seems the only solution after all. But it floods the wonderful pysqlite statement cache with new SQL statements for each new bind value. Preferably, only the operator is substituted in the SQL, GLOB or just =, depending on the actual bind value. That leaves just two different statements. But I don't know if the result is the same, as I don't know GLOB very well. If it is affected by the case_sensitive_like pragma, my idea is too simple.
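[Editor's sketch, not from the thread.] The two-statement idea above, substituting only the operator per bind value, might look like this (the schema is reduced to a single `word` table, and the helper name is ours). As it happens, GLOB is always case sensitive and `=` under the default BINARY collation is too, so the exact-match results should agree; case_sensitive_like affects LIKE, not GLOB.

```python
import sqlite3

def find_words(conn, pattern):
    # Switch the operator only: GLOB when the pattern has a wildcard,
    # plain = otherwise. The value itself stays a bind variable, so the
    # pysqlite statement cache holds at most two distinct SQL texts.
    op = "GLOB" if any(c in pattern for c in "*?[") else "="
    return [r[0] for r in
            conn.execute(f"SELECT word FROM word WHERE word {op} ?", (pattern,))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE word (word TEXT)")
conn.executemany("INSERT INTO word VALUES (?)",
                 [("test",), ("tester",), ("other",)])
hits_glob = find_words(conn, "tes*")
hits_eq = find_words(conn, "test")
print(hits_glob, hits_eq)  # ['test', 'tester'] ['test']
```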
Re: [sqlite] Trigger update of multiple columns
Hi, I tried to update a list of columns:

UPDATE t SET (c1, c2, c3) = (SELECT c1, c2, c3 FROM t2 WHERE ...)

but this syntax is not accepted, as you probably already know. I may promote [INSERT OR] REPLACE then. It is syntactically described in the SQLite documentation, but for the semantics you may see the original MySQL doc: http://dev.mysql.com/doc/refman/5.0/en/replace.html. It is the only way that I see to do the update with only a single scan of the Products table. But maybe REPLACE causes trouble in combination with triggers, because indirectly it performs a DELETE and a new INSERT. Other suggestions would be welcome. Regards, Ed

On 17-Jun-2007, at 10:00, TB wrote:

> Hi All, I have a pretty standard sales tracking database consisting of tables:
>
> Products - each row is a product available for sale. Includes fields: Code, Buy, Sell, Description
> Sales - each row is a sale made to a customer. Includes fields: Ref, Customer
> Sale_Products - each row is a product (many) included in a sale (one). Includes fields: Sale_Ref, Code, Buy, Sell, Description
>
> Now, when I add a new Sale_Products row and assign a product Code to it, I want to trigger it to auto-enter the Buy and Sell prices, and the Description, by looking up the related Product (i.e. where Sale_Products.Code = Products.Code). How can I do this? I have something like this:
>
> create trigger Update_Sale_Products_Code after update of Code on Sale_Products
> begin
>   update Sale_Products set
>     Buy = (select Buy from Products where Products.Code = new.Code),
>     Sell = (select Sell from Products where Products.Code = new.Code),
>     Description = (select Description from Products where Products.Code = new.Code)
>   where rowid = new.rowid;
> end
>
> It works, but it's unnecessarily slow, since it takes a while to look up the huge Products table (which is actually a UNION ALL of various supplier catalogs), and it's looking it up once for each updated field (and I have more fields to look up than shown in this example). It would be more efficient to look it up once to find the corresponding product (according to Products.Code = new.Code), but I'm stumped as to how to do that. I tried:
>
> create trigger Update_Sale_Products_Code after update of Code on Sale_Products
> begin
>   update Sale_Products set
>     Buy = (select Buy from Products),
>     Sell = (select Sell from Products),
>     Description = (select Description from Products)
>   where rowid = new.rowid and Products.Code = new.Code;
> end
>
> But that fails, and seems a bit ambiguous anyway. It seems to need some kind of JOIN, but I can't see provision for it in the UPDATE syntax. There must be a much simpler way that I'm overlooking. Please enlighten me. Thanks, Tom
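[Editor's sketch, not from the thread.] A hypothetical version of that trigger using the INSERT OR REPLACE idea promoted above: one scan of Products rebuilds the whole Sale_Products row. Table and column names follow the post, but the trigger body itself is our own sketch. The caveat from the reply applies: REPLACE is a DELETE plus a fresh INSERT, so any delete triggers on Sale_Products would also fire.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (Code TEXT PRIMARY KEY, Buy REAL, Sell REAL, Description TEXT);
CREATE TABLE Sale_Products (Sale_Ref TEXT, Code TEXT, Buy REAL, Sell REAL, Description TEXT);
-- Rebuild the updated row in one statement, keeping its rowid so the
-- REPLACE lands on the same row.
CREATE TRIGGER Update_Sale_Products_Code AFTER UPDATE OF Code ON Sale_Products
BEGIN
  INSERT OR REPLACE INTO Sale_Products (rowid, Sale_Ref, Code, Buy, Sell, Description)
  SELECT new.rowid, new.Sale_Ref, new.Code, p.Buy, p.Sell, p.Description
  FROM Products p WHERE p.Code = new.Code;
END;
INSERT INTO Products VALUES ('A1', 2.0, 3.0, 'widget');
INSERT INTO Sale_Products (Sale_Ref, Code) VALUES ('S1', NULL);
UPDATE Sale_Products SET Code = 'A1';
""")
row = conn.execute(
    "SELECT Code, Buy, Sell, Description FROM Sale_Products").fetchone()
print(row)  # ('A1', 2.0, 3.0, 'widget')
```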
Re: Re[2]: [sqlite] Can't update table from itself or multiple tables
Hi, Trey.

> I checked the not-implemented features list of SQLite and found nothing about UPDATE ... FROM.

There's no FROM on http://www.sqlite.org/lang_update.html, so that's a hint that it's not supported. All the same, maybe this should be added to http://www.sqlite.org/omitted.html

> update table1 set val = ss.v
> from (select t2.some as v, t1.id as id
>       from table1 t1, table2 t2
>       where t1.id = t2.nid) as ss
> where ss.id = table1.id

How about:

update table1 set val = (select some from table2 where table1.id = table2.nid);

- TMack

> It works. Thanks. Sometimes this approach is too slow, sometimes it doesn't work, but I can go ahead now :).

Hello, I wish to propose another statement, though it does not look so attractive. But it is surely fast:

insert or replace into table1 (rowid, id, val2, val)
select t1.rowid, t1.id, t1.val2, t2.some
from table1 t1 left outer join table2 t2 on t2.id = t1.nid;

This is equivalent to the update statement, assuming:
1. table1 has no primary key (so rowid is used)
2. table1 has no further columns than the ones mentioned (id, val and val2)

The outer join is added for complete equivalence. It arranges that val gets assigned NULL where no matching row is found in table2. If that is not desirable, the outer join can be omitted to leave unmatched rows unchanged. Hope this is useful, Ed Pasma
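[Editor's sketch, not from the thread.] The correlated-subquery workaround can be demonstrated end to end with a minimal schema of our own, using the column names from the thread. Note that rows of table1 without a match in table2 get NULL, matching the outer-join equivalence discussed above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, val TEXT);
CREATE TABLE table2 (nid INTEGER, some TEXT);
INSERT INTO table1 VALUES (1, NULL), (2, NULL);
INSERT INTO table2 VALUES (1, 'one');
-- The correlated subquery plays the role of UPDATE ... FROM:
UPDATE table1 SET val = (SELECT some FROM table2 WHERE table2.nid = table1.id);
""")
rows = conn.execute("SELECT id, val FROM table1 ORDER BY id").fetchall()
print(rows)  # [(1, 'one'), (2, None)]
```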
Re: [sqlite] One more SQLite threading question
> However, it would be too time consuming to serialize every call to sqlite3_step(), so I wonder whether it can be called in another thread.

This almost immediately raises "library routine called out of sequence". It occurs as soon as the processing of A and B overlaps, that is, A is preparing statement #2 while B is still executing #1.

> Have you tried using a mutex or some other way to prevent truly simultaneous calls into SQLite? My guess is that it should work well then, but I haven't tried it myself...

Yes, I did the same experiment with a lock that made thread A wait until B was finished. So effectively only one thread can be active at a time. I don't see how the outcome of this experiment can be of any interest, as there is no time reduction any longer. But your guess is right: it works.
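[Editor's sketch, not from the thread.] The mutex experiment described above can be sketched in Python: one connection, with every call into SQLite wrapped in a single lock so two threads never enter the library at the same time (Python's sqlite3 with check_same_thread=False stands in for the shared C-level connection; the helper name is ours).

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
lock = threading.Lock()

def guarded(sql, params=()):
    # fetchall() inside the lock, so the statement is fully stepped before
    # another thread may touch the connection.
    with lock:
        return conn.execute(sql, params).fetchall()

guarded("CREATE TABLE t (x)")
threads = [threading.Thread(target=guarded,
                            args=("INSERT INTO t VALUES (?)", (i,)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
with lock:
    conn.commit()
count = guarded("SELECT COUNT(*) FROM t")
print(count)  # [(4,)]
```

As the reply notes, this is safe but removes any parallelism: only one thread is ever inside SQLite at a time.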
Re: [sqlite] One more SQLite threading question
Hello, I have no inside knowledge of SQLite, but I am in the circumstance to easily do this experiment. Hope I understand it right and that you consider a sort of pipelining. Anyway, I started the two threads A and B, and made A exclusively do all the sqlite3_prepare calls, and B the rest, including sqlite3_step. This almost immediately raises "library routine called out of sequence". It occurs as soon as the processing of A and B overlaps, that is, A is preparing statement #2 while B is still executing #1. So the experimental conclusion is that this won't work. But this applies only to the pipelining idea. Serializing onto a single connection may still offer an interesting new locking model. Regards, Ed Pasma

On 14-May-2007, at 13:04, Jiri Hajek wrote:

> Hello, I have tried to search all the documentation about threading in SQLite, but I'm still somewhat confused. It's often suggested to create a pool of sqlite3 structures, but what if I would like to have only one sqlite3 connection and serialize all the DB operations to one thread (call it 'A') that would prepare and execute all the queries? I guess that this would work well... However, it would be too time consuming to serialize every call to sqlite3_step(), so I wonder whether it can be called in another thread. So my scenario is:
>
> 1. Thread B wants to open a query 'SELECT * FROM Tbl1'.
> 2. Thread A is used to prepare the query 'SELECT * FROM Tbl1' using sqlite3_prepare16_v2(), using the same sqlite3 DB connection as for possibly some other running SQL statements.
> 3. Thread B now repeatedly calls sqlite3_step(), sqlite3_column_text16() and similar functions in order to get all rows from the DB.
> 4. Thread A is used to call sqlite3_finalize() on the opened query.
>
> So my questions are:
> a. Would the code described above work?
> b. In step 3, do I have to somehow make sure that calls to sqlite3_step() don't interfere with other SQLite processing in thread A, e.g. by Windows CriticalSections? Is anything like this also needed for sqlite3_column_text16()?
>
> Thanks for any explanation, Jiri