[sqlite] Using Indexing in Joins Method
Hi,

I have 4 tables, and my schema looks like this:

CREATE TABLE ALBUMARTIST(AlbumArtistId INTEGER PRIMARY KEY NOT NULL, AlbumArtistName TEXT NOT NULL COLLATE NOCASE DEFAULT 'Unknown', UNIQUE(AlbumArtistName));
CREATE TABLE ARTIST(ArtistId INTEGER PRIMARY KEY NOT NULL, ArtistName TEXT NOT NULL COLLATE NOCASE DEFAULT 'Unknown', UNIQUE(ArtistName));
CREATE TABLE BGM(BgmId INTEGER PRIMARY KEY NOT NULL, BgmName TEXT NOT NULL COLLATE NOCASE DEFAULT 'Unknown', UNIQUE(BgmName));
CREATE TABLE MUSIC(Id INTEGER PRIMARY KEY NOT NULL, Album TEXT NOT NULL COLLATE NOCASE DEFAULT 'Unknown', Track TEXT NOT NULL, URL TEXT NOT NULL, Artist_Id INTEGER, AlbumArtist_Id INTEGER, Bgm_Id INTEGER);

Artist_Id, AlbumArtist_Id, and Bgm_Id in MUSIC hold the ids of rows in the ARTIST, ALBUMARTIST, and BGM tables. I will run the following queries:

1. SELECT * FROM ARTIST ORDER BY ArtistName;
2. SELECT * FROM ALBUMARTIST ORDER BY AlbumArtistName;
3. SELECT Track, URL FROM MUSIC ORDER BY Track;
4. SELECT BgmId, BgmName FROM BGM;
5. SELECT DISTINCT Album FROM MUSIC WHERE Artist_Id = ? ORDER BY Album;
6. SELECT Track, URL FROM MUSIC WHERE Artist_Id = %s ORDER BY Track;
7. SELECT Track, URL FROM MUSIC WHERE Artist_Id = ? AND Album = ? ORDER BY Track;
8. SELECT Track, URL FROM MUSIC WHERE AlbumArtist_Id = '%s' ORDER BY Track;
9. SELECT Track, URL FROM MUSIC WHERE Bgm_Id = '%s' GROUP BY Track;

To get better performance, should I index the ALBUMARTIST, ARTIST, and BGM tables, or is that not needed? Will performance improve if I index Artist_Id and Album in the MUSIC table? Kindly suggest some approaches.

Thanks & Regards,
Sreedhar.A

- To unsubscribe, send email to [EMAIL PROTECTED] -
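A quick way to evaluate candidate indexes for queries 5-9 above is to create them and inspect EXPLAIN QUERY PLAN. The sketch below uses Python's built-in sqlite3 module; the index names are my own invention, and the assumption (not stated in the post) is that one composite index on MUSIC(Artist_Id, Album, Track) can serve queries 5, 6, and 7:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MUSIC(
  Id INTEGER PRIMARY KEY NOT NULL,
  Album TEXT NOT NULL COLLATE NOCASE DEFAULT 'Unknown',
  Track TEXT NOT NULL,
  URL TEXT NOT NULL,
  Artist_Id INTEGER, AlbumArtist_Id INTEGER, Bgm_Id INTEGER);

-- one composite index serves queries 5, 6 and 7 (equality on Artist_Id,
-- then Album, then Track)
CREATE INDEX idx_music_artist ON MUSIC(Artist_Id, Album, Track);
-- single-column indexes for queries 8 and 9
CREATE INDEX idx_music_albumartist ON MUSIC(AlbumArtist_Id);
CREATE INDEX idx_music_bgm ON MUSIC(Bgm_Id);
""")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT Track, URL FROM MUSIC "
    "WHERE Artist_Id = ? AND Album = ? ORDER BY Track",
    (1, "Greatest Hits")).fetchall()
print(plan)  # the plan should name idx_music_artist rather than a full scan
```

Queries 1-4 scan whole tables; the UNIQUE constraints on the name columns already give ARTIST, ALBUMARTIST, and BGM an index that can satisfy their ORDER BY.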
RE: [sqlite]:Using sqlite3_progress_handler for GUI application
Hi,

Thank you very much for the suggestions.

Best Regards,
A.Sreedhar.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 19, 2007 12:41 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite]:Using sqlite3_progress_handler for GUI application

Zbigniew Baniewski <[EMAIL PROTECTED]> wrote:
> On Tue, Dec 18, 2007 at 12:24:25PM +, Simon Davies wrote:
>
> > Prob shd be http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
>
> One question - using the example mentioned there:
>
> If we've created an index: CREATE INDEX example1 ON tracks(singer, title);
>
> So, it'll make the query, like below, much faster:
>
> SELECT title FROM tracks
> WHERE singer='Madonna'
>   AND title<:firsttitle
> ORDER BY title DESC
> LIMIT 5;
>
> but I understand, that when I'll try to add in the query a field
> not covered by index "example1", like this:
>
> SELECT title FROM tracks
> WHERE singer='Madonna'
>   AND title<:firsttitle
>   AND year_ed > 1985;
>
> so, then I'm losing every profit from having "example1" index,
> right? Or perhaps "not quite every", and some speedup still remains -
> just because _some_ columns are covered by example1 anyway?

The index is still used to speed the search. But the extra "AND year_ed>1985" term requires SQLite to check each row coming out of the index and discard those for which the condition is not true. This might be a small or a large loss in performance, depending on how many rows match the condition.

--
D. Richard Hipp <[EMAIL PROTECTED]>

- To unsubscribe, send email to [EMAIL PROTECTED] -
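The behaviour Richard describes can be observed directly with EXPLAIN QUERY PLAN. A minimal sketch in Python's sqlite3 module, reusing the tracks schema from the quoted thread (the year_ed term is filtered row by row on top of the example1 index scan):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tracks(singer TEXT, title TEXT, year_ed INTEGER);
CREATE INDEX example1 ON tracks(singer, title);
""")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT title FROM tracks
    WHERE singer='Madonna' AND title<'Like a Prayer' AND year_ed>1985
    ORDER BY title DESC""").fetchall()
print(plan)  # example1 still drives the search; year_ed is checked per row
```

The plan names example1 even with the extra term, confirming that the index is not abandoned; only the rows it yields are post-filtered.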
Re: [sqlite] aces a databases store into a web server not into my local PC
Ok, thanks. So I must upload my SQLite database onto my web server. I want to use PHP, because my hosting provider supports PHP. Where can I learn how config.php must be set up? I don't know if it must be configured like it is for SQL. I understand that SQLite handles queries in the same manner as SQL. Is there a help page or tutorial? What about access rights? That is an important point for a database.

Thanks again
--
View this message in context: http://www.nabble.com/aces-a-databases-store-into-a-web-server-not-into-my-local-PC-tp14462350p14466588.html
Sent from the SQLite mailing list archive at Nabble.com.

- To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] PATCH: WHERE clause OR to UNION optimization
The attached patch implements the WHERE clause "OR to UNION" optimization as described in this post: http://www.mail-archive.com/sqlite-users@sqlite.org/msg09004.html

If the computed cost of the rewritten WHERE clause is lower than the original query when indexes are taken into account, then it will perform the optimization. If the cost is estimated to be higher then the query will not be rewritten.

Given the database formed by running these statements:

create table stuff(a,b,c,d);
insert into stuff values(1,2,3,4);
create temp view v1 as select random()%100, random()%100, random()%1000, random()%1 from stuff x, stuff y;
insert into stuff select * from v1;
insert into stuff select * from v1;
insert into stuff select * from v1;
insert into stuff select * from v1;
insert into stuff select * from v1;
create index stuff_b on stuff(b);
create index stuff_c on stuff(c);
create index stuff_d on stuff(d);
analyze;

The patched version of sqlite 3.5.4 will run the following query many times faster than an unpatched sqlite 3.5.4:

select b, count(*) from stuff
 where c=2 or b=23 or c=17 or c=493 or d=7 or c=111 and a=14
 group by 1 order by 2 DESC, 1 limit 10;

On my machine, the patched version produces these query timings:

CPU Time: user 0.724045 sys 0.092005

with the EXPLAIN QUERY PLAN:

0|0|TABLE stuff USING PRIMARY KEY
0|0|TABLE stuff WITH INDEX stuff_c
0|0|TABLE stuff WITH INDEX stuff_d
0|0|TABLE stuff WITH INDEX stuff_b
0|0|TABLE stuff WITH INDEX stuff_c

For the same query the unpatched sqlite 3.5.4 produces:

CPU Time: user 20.869304 sys 8.912557

0|0|TABLE stuff WITH INDEX stuff_b ORDER BY

Only single table queries are supported by this OR optimization. For this optimization to be considered, the WHERE clause may only consist of column equality comparisons to constants, ORs and ANDs. The optimization only looks at the top-level WHERE clause ORs. It will not work with "IN" expressions.
Nor will it expand expressions like "a=1 AND (b=2 or c=3)" into "a=1 AND b=2 OR a=1 AND c=3" - although if manually expanded, the latter form could potentially be optimized. It passes "make test" without regressions, but more testing is needed.

Index: src/select.c
===================================================================
RCS file: /sqlite/sqlite/src/select.c,v
retrieving revision 1.372
diff -u -3 -p -r1.372 select.c
--- src/select.c	14 Dec 2007 17:24:40 -	1.372
+++ src/select.c	22 Dec 2007 02:39:00 -
@@ -12,7 +12,7 @@
 ** This file contains C code routines that are called by the parser
 ** to handle SELECT statements in SQLite.
 **
-** $Id: select.c,v 1.372 2007/12/14 17:24:40 drh Exp $
+** $Id: select.c,v 1.370 2007/12/13 21:54:11 drh Exp $
 */
 #include "sqliteInt.h"
@@ -2961,6 +2961,440 @@ static void updateAccumulator(Parse *pPa
   pAggInfo->directMode = 0;
 }
+
+#ifndef SQLITE_OMIT_OR_UNION_TRANSFORM
+
+/* The function prefix "o2u" stands for "OR to UNION TRANSFORM" */
+static double o2uAndCost(Expr *p, int iTable, Bitmask *bm);
+static double o2uEqCost(Expr *p, int iTable, Bitmask *bm);
+
+/*
+** Count the number of bits in Bitmask. Each bit represents the existance
+** of a column in an expression. The zeroth bit represents the use of the
+** rowid column in the WHERE clause, which is different from non-o2u
+** uses of Bitmask in the code.
+*/
+static int o2uBitCount(Bitmask x){
+  int n = 0;
+  while( x ){
+    x &= x-1;
+    ++n;
+  }
+  return n;
+}
+
+/*
+** Full table scan cost is simply the average number rows for each index
+** for the specified table.
+*/
+static double o2uFullTableScanCost(Table* pTab){
+  if( pTab ){
+    double sum = 0;
+    int n = 0;
+    Index *pIndex;
+    for( pIndex = pTab->pIndex; pIndex; pIndex = pIndex->pNext, n++ ){
+      if( pIndex->nColumn>0 ){
+        sum += pIndex->aiRowEst[0];
+      }
+    }
+    if( n && sum>n ){
+      return sum/n;
+    }
+  }
+  return 100;
+}
+
+/*
+** Estimate a WHERE clause column's cost taking indexes into account.
+** Record any columns encountered in Bitmask.
+*/
+static double o2uColumnCost(Expr *p, int iTable, Bitmask *bm){
+  if( p && p->op==TK_COLUMN && p->pSelect==0 && p->iTable==iTable && p->pTab ){
+    int iColumn = p->iColumn;
+    Index* pIndex;
+    if( iColumn>=-1 && iColumn<=(int)(sizeof(Bitmask)*8-2) ){
+      *bm |= 1<<(iColumn+1); /* rowid column -1 is the 0th bit */
+    }else{
+      *bm |= 0x3; /* iTable beyond range: disqualify single column OR opt */
+    }
+    if( iColumn==-1 ){
+      return 10; /* Match: rowid */
+    }
+    for( pIndex = p->pTab->pIndex; pIndex; pIndex = pIndex->pNext ){
+      if( pIndex->nColumn>0 && pIndex->aiColumn[0]==iColumn ){
+        return pIndex->aiRowEst[1]; /* Match
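Independent of the patch, the transformation itself can be verified by hand: an OR of single-column equality terms is equivalent to a UNION (which de-duplicates) of one subquery per term, joined back on rowid. A small self-check in Python's sqlite3 module, using synthetic data rather than the random()-based generator from the post:

```python
import random
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stuff(a, b, c, d)")
random.seed(0)  # deterministic synthetic data
con.executemany(
    "INSERT INTO stuff VALUES(?, ?, ?, ?)",
    [(random.randrange(100), random.randrange(100),
      random.randrange(1000), random.randrange(10)) for _ in range(5000)])
con.execute("CREATE INDEX stuff_b ON stuff(b)")
con.execute("CREATE INDEX stuff_c ON stuff(c)")

orig = con.execute(
    "SELECT count(*) FROM stuff WHERE c=2 OR b=23").fetchone()[0]

# Manual OR-to-UNION rewrite: each arm can use its own index, and the
# UNION on rowid removes rows matched by more than one arm.
rewritten = con.execute("""
    SELECT count(*) FROM (
        SELECT rowid FROM stuff WHERE c=2
        UNION
        SELECT rowid FROM stuff WHERE b=23)""").fetchone()[0]

assert orig == rewritten
```

The UNION on rowid is what preserves OR semantics when a row satisfies more than one term; a plain UNION ALL would double-count such rows.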
[sqlite] Trac Account
Hey all, I was wondering what it takes to get an account on Trac. I'm basically the maintainer of the Mozilla Project's SQLite wrapper, and I'd find things to be a bit clearer if bug reports/comments made by me were in fact labeled as such. In addition, I believe that I can get e-mail notifications of changes to tickets, which is a heck of a lot better than bookmarking a ticket and checking back every few days. Is there some policy set up for this that I just haven't found? Cheers, Shawn Wilsher - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Fetch all result set to memory
"Ofir Neuman" <[EMAIL PROTECTED]> wrote: > Hi, > > I'm using SQLite in a multithread application and I would like to fetch every > select statement into the memory. > > I can't use sqlite3_get_table since it doesn't support Unicode as far as I > know. > > Is there a better way to fetch all the result to memory other than fetching > it one by one and build a custom matrix? > Sqlite3_get_table() does support unicode. And it works by fetching the results one by one and building a custom matrix. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Fetch all result set to memory
Whichever way you do it, SQLite fetches the rows one by one. Ofir Neuman wrote: Hi, I'm using SQLite in a multithread application and I would like to fetch every select statement into the memory. I can't use sqlite3_get_table since it doesn't support Unicode as far as I know. Is there a better way to fetch all the result to memory other than fetching it one by one and build a custom matrix? Thanks. - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Fetch all result set to memory
Hi, I'm using SQLite in a multithread application and I would like to fetch every select statement into the memory. I can't use sqlite3_get_table since it doesn't support Unicode as far as I know. Is there a better way to fetch all the result to memory other than fetching it one by one and build a custom matrix? Thanks.
Re: [sqlite] aces a databases store into a web server not into my local PC
What programming language are you using on the server side? PHP, Python, Tcl, etc.? These languages already have packages to access data stored in SQLite.

Samuel

On 2007-12-21 16:10, finder wrote:
> Hi,
> I can't find how to access a database stored on a web server through a web page.
> I want to make a form that posts a query and receives the result using a web page.
>
> Many thanks

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] aces a databases store into a web server not into my local PC
On Friday, 21 December 2007, finder wrote:
> Hi,
> I can't find how to access a database stored on a web server through a web page.
> I want to make a form that posts a query and receives the result using a web page.
>
> Many thanks

You could use a PHP script using the PDO::SQLite interface...

- To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] aces a databases store into a web server not into my local PC
Hi,
I can't find how to access a database stored on a web server through a web page.
I want to make a form that posts a query and receives the result using a web page.

Many thanks
--
View this message in context: http://www.nabble.com/aces-a-databases-store-into-a-web-server-not-into-my-local-PC-tp14462350p14462350.html
Sent from the SQLite mailing list archive at Nabble.com.

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] shared cache mode locking
So did my post. We are talking about the same thing. Definitely confusing, at least to me. The problem exists wherein you have two shared connections and one connection performs a begin exclusive... The other connection was just ignoring the exclusivity lock and continuing on its merry way and acquiring a table level lock, which causes the first connection to get a SQLITE_LOCKED upon an insert to a table that the second connection is reading. The documentation is quite clear that once a connection acquires an EXCLUSIVE lock it has control and should not be locked out from writing by any other connections. The dual locking model (prior to the resolution) is ambiguous and could possibly lead applications to deadlocks. These are just my thoughts on the matter, and are probably not 100% correct.

Ken

Ed Pasma <[EMAIL PROTECTED]> wrote: No, you did not confuse me. We are talking about different things it appears. My post refers to the shared-cache locking model (http://sqlite.org/sharedcache.html). The document is clear by itself. What makes it confusing is that a shared cache instance exists as a single normal connection which may coincide with other, "normal" database connections. Quite a generous feature. But it means that the two locking models do apply at the same time. The joint connections within a shared cache are subject to the regular locking model in relation to possible other connections to the same database. Confusing or not?

Ken wrote:
> Ed,
>
> Sorry if I confused you, a "Write" lock is really an EXCLUSIVE lock
> per sqlite documentation. I used the two interchangeably, pardon my
> error.
>
> A begin exclusive indicates the beginning of a transaction. It
> escalates the database lock to an EXCLUSIVE lock. The begin
> transaction does not immediately do this, rather it waits until the
> buffer cache spills to disk. At this point it attempts to escalate
> the Reserved lock to a Pending then an Exclusive lock.
> > There is only 1 type of EXCLUSIVE (write) lock, It is database > wide and is all or nothing. Once you have the lock, it prevents > other access to the DB. > > Ken > > > Ed Pasma wrote: The ticket has already been > resolved, I see. So it has been > considered a bug. In my earlier reply I tried to defend the current > behavour to be in line with the document, http://sqlite.org/ > sharedcache.html. I'm happy to change my mind now. Only I miss > something in the model as described in the document. This may > either be: > - exclusive transactions as a new kind of transactions, apart form > read- and write-transactions > or > - database-level locking as a new level above transaction-level > locking. > May be this suggestion is too naive, anyway it helps me explain the > wonderful cache sharing. > > Ken wrote: > >> Ed, >> >> Dan opened a ticket. I agree the documentation isn't clear on the >> Exlusive locking state. >> >> Not really sure, if this is by design or a bug at this stage. I do >> think its a great feature of the Shared cache mode to allow table >> level locking. But I'm curious with this table level locking what >> would happen if two threads performed writes to two seperate tables >> concurrently using only a begin immediate. >> >> Thread a writes to tab1, >> Thread b writes to tab2, (Is this allowed ? or is a sqlite_locked >> kicked returned?) >> >> If it is allowed then would there be two journal files concurrently >> existing? And What happens during a crash with two journals ? >> >> This gets complicated very quickly. >> >> Ken >> >> Ed Pasma wrote: Hello,` >> Empirically I found that it is exactly true. >> Must admit I'm confused but may it is in line with the Shared-Cache >> locking model. >> This does not mention the EXCLUSIVE locking state. >> The most 'secure' locking state it mentions is a write-transaction >> and this can coexist with read-transactions from others. 
>> Thus "begin exclusive" starts a write-transaction and the on-going >> read does not interfere. >> The error message seems to clarify the situation further: database >> table is locked. Thus the collision occurs at the table-level. And >> yes, taking different tables for read and write, it does not occur. >> Practically this may not help very much. But may be the following >> does in case you have a busy_timeout setting. >> When having Shared-Cache mode enabled, the timeout setting appears to >> be ignored by SQLite. This makes locking situations surface rather >> soon, also when there is no dead-lock. >> The situation may be handled by a programmatic retry? >> Regards, Ed >> >> Op 19-dec-2007, om 19:12 heeft Ken het volgende geschreven: >> >>> Some additional info: >>> >>> when the sqlite_lock is returned there is another thread that >>> appears to be reading the same table. Does the sqlite3 step return >>> sqlite_locked in this case? >>> >>> Thanks, >>> Ken >>> >>> >>> Ken wrote: >>> While using the new 3.5.4 sqlite3_enable_shared_cach
Re: [sqlite] shared cache mode locking
No, you did not confuse me. We are talking about different things it appears. My post refers to the shared-cache locking model (http:// sqlite.org/sharedcache.html). The document is clear by itself. What makes it confusing, is that a shared cache instance exist as a single normal connection which may coincide with other, "normal" database connections. Quite a generous feature. But it means that the two locking models do apply at the same time. The joint connections within a shared cache are subject to the regular locking model in relation to possible other connections to the same database. Confusing or not? Ken wrote: Ed, Sorry if I confused you, a "Write" lock is really an EXCLUSIVE lock per sqlite documentation. I used the two interchangeably, pardon my error. A begin exclusive indicates the beginning of a transaction, It escalates the database lock to an EXCLUSIVE lock. The begin transaction does not immediately do this, rather it waits until the buffer cache spills to disk. At this point it attempts to escalate the Reserved lock to a Pending then an Exclusive lock. There is only 1 type of EXCLUSIVE (write) lock, It is database wide and is all or nothing. Once you have the lock, it prevents other access to the DB. Ken Ed Pasma <[EMAIL PROTECTED]> wrote: The ticket has already been resolved, I see. So it has been considered a bug. In my earlier reply I tried to defend the current behavour to be in line with the document, http://sqlite.org/ sharedcache.html. I'm happy to change my mind now. Only I miss something in the model as described in the document. This may either be: - exclusive transactions as a new kind of transactions, apart form read- and write-transactions or - database-level locking as a new level above transaction-level locking. May be this suggestion is too naive, anyway it helps me explain the wonderful cache sharing. Ken wrote: Ed, Dan opened a ticket. I agree the documentation isn't clear on the Exlusive locking state. 
Not really sure, if this is by design or a bug at this stage. I do think its a great feature of the Shared cache mode to allow table level locking. But I'm curious with this table level locking what would happen if two threads performed writes to two seperate tables concurrently using only a begin immediate. Thread a writes to tab1, Thread b writes to tab2, (Is this allowed ? or is a sqlite_locked kicked returned?) If it is allowed then would there be two journal files concurrently existing? And What happens during a crash with two journals ? This gets complicated very quickly. Ken Ed Pasma wrote: Hello,` Empirically I found that it is exactly true. Must admit I'm confused but may it is in line with the Shared-Cache locking model. This does not mention the EXCLUSIVE locking state. The most 'secure' locking state it mentions is a write-transaction and this can coexist with read-transactions from others. Thus "begin exclusive" starts a write-transaction and the on-going read does not interfere. The error message seems to clarify the situation further: database table is locked. Thus the collision occurs at the table-level. And yes, taking different tables for read and write, it does not occur. Practically this may not help very much. But may be the following does in case you have a busy_timeout setting. When having Shared-Cache mode enabled, the timeout setting appears to be ignored by SQLite. This makes locking situations surface rather soon, also when there is no dead-lock. The situation may be handled by a programmatic retry? Regards, Ed Op 19-dec-2007, om 19:12 heeft Ken het volgende geschreven: Some additional info: when the sqlite_lock is returned there is another thread that appears to be reading the same table. Does the sqlite3 step return sqlite_locked in this case? Thanks, Ken Ken wrote: While using the new 3.5.4 sqlite3_enable_shared_cache I ran into a strange lock situation. 
SQLITE_LOCK is returned from an insert statement, even though the thread/connection performed a successful "begin exclusive" transaction.

begin exclusive
insert into table... ---> returns SQLITE_LOCKED

Is it possible for both connections to begin exclusive transactions whilst having the shared cache enabled?

Thanks,
ken

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Explain query plan
On 20-Dec-2007, at 3:02 PM, Kees Nuyt wrote:
> You will get much more detail with EXPLAIN SELECT ... It shows the VDBE code, which looks cryptic at first but will prove really informative.

I'm still at the cryptic phase, but I'll figure it out. Thanks for confirming my suspicions about EXPLAIN QUERY PLAN. Still very useful, though. :)

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] shared cache mode locking
Ed, Sorry if I confused you, a "Write" lock is really an EXCLUSIVE lock per sqlite documentation. I used the two interchangeably, pardon my error. A begin exclusive indicates the beginning of a transaction, It escalates the database lock to an EXCLUSIVE lock. The begin transaction does not immediately do this, rather it waits until the buffer cache spills to disk. At this point it attempts to escalate the Reserved lock to a Pending then an Exclusive lock. There is only 1 type of EXCLUSIVE (write) lock, It is database wide and is all or nothing. Once you have the lock, it prevents other access to the DB. Ken Ed Pasma <[EMAIL PROTECTED]> wrote: The ticket has already been resolved, I see. So it has been considered a bug. In my earlier reply I tried to defend the current behavour to be in line with the document, http://sqlite.org/ sharedcache.html. I'm happy to change my mind now. Only I miss something in the model as described in the document. This may either be: - exclusive transactions as a new kind of transactions, apart form read- and write-transactions or - database-level locking as a new level above transaction-level locking. May be this suggestion is too naive, anyway it helps me explain the wonderful cache sharing. Ken wrote: > Ed, > > Dan opened a ticket. I agree the documentation isn't clear on the > Exlusive locking state. > > Not really sure, if this is by design or a bug at this stage. I do > think its a great feature of the Shared cache mode to allow table > level locking. But I'm curious with this table level locking what > would happen if two threads performed writes to two seperate tables > concurrently using only a begin immediate. > > Thread a writes to tab1, > Thread b writes to tab2, (Is this allowed ? or is a sqlite_locked > kicked returned?) > > If it is allowed then would there be two journal files concurrently > existing? And What happens during a crash with two journals ? > > This gets complicated very quickly. 
> > Ken > > Ed Pasma wrote: Hello,` > Empirically I found that it is exactly true. > Must admit I'm confused but may it is in line with the Shared-Cache > locking model. > This does not mention the EXCLUSIVE locking state. > The most 'secure' locking state it mentions is a write-transaction > and this can coexist with read-transactions from others. > Thus "begin exclusive" starts a write-transaction and the on-going > read does not interfere. > The error message seems to clarify the situation further: database > table is locked. Thus the collision occurs at the table-level. And > yes, taking different tables for read and write, it does not occur. > Practically this may not help very much. But may be the following > does in case you have a busy_timeout setting. > When having Shared-Cache mode enabled, the timeout setting appears to > be ignored by SQLite. This makes locking situations surface rather > soon, also when there is no dead-lock. > The situation may be handled by a programmatic retry? > Regards, Ed > > Op 19-dec-2007, om 19:12 heeft Ken het volgende geschreven: > >> Some additional info: >> >> when the sqlite_lock is returned there is another thread that >> appears to be reading the same table. Does the sqlite3 step return >> sqlite_locked in this case? >> >> Thanks, >> Ken >> >> >> Ken wrote: >> While using the new 3.5.4 sqlite3_enable_shared_cache I ran into a >> strange lock situation. >> >> SQLITE_LOCK is returned from an insert statement, even though >> the thread/connection performed a successful "begin exclusive" >> transaction. >> >>begin exclusive >> insert into table... ---> returns SQLITE_LOCKED >> >> Is it possible for both connections to begin exclusive transactions >> whilst having the shared cache anabled? >> >> Thanks, >> ken >> >> > > > > -- > --- > To unsubscribe, send email to [EMAIL PROTECTED] > -- > --- > > - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] extremely differing speed on :memory: queries depending on initial db size
hey all,

Thanks for your replies so far. Some more details:

First of all, I'm coding a .NET application using the ADO.NET 2.0 provider by Robert Simpson. I posted there as well, but thought it would be more of a core SQLite problem (suspecting memory handling by SQLite or the OS, or ...); maybe I'm wrong :) Version info is 1.0.47.1 for the provider, containing SQLite 3.5.3. My dev machine runs Win XP Pro SP2.

I detached the file database before running the queries against the :memory: one. As stated, I already tried it without indices, which did not affect speed in a noticeable manner. I don't know anything about SQLite's internal structures. To clone the file db into memory I used the method provided on http://www.sqlite.org/cvstrac/wiki?p=InMemoryDatabase (adapted the Tcl implementation sample).

I'm using rather simple update, insert and delete statements - they are all about equally slow. Samples (I run into the performance trap when executing only these, too):

INSERT INTO [Broadcast] ( [id], [channelId], [title], [startTime], [endTime] ) VALUES ( @id, @channelId, @title, @startTime, @endTime )

UPDATE [Broadcast] SET [Weekday] = @Weekday, [StartHour] = @StartHour, [BroadcastDay] = @BroadcastDay WHERE [Id] = @Id

UPDATE [Broadcast] SET [channelId] = @channelId, [title] = @title, [startTime] = @startTime, [endTime] = @endTime WHERE [Id] = @Id

The data consists of text and datetime values as well as binary data (small images, around 15k each), each using about half of the db space. CPU usage is almost always around 100%. A test case won't be too easy to create; this would be ultima ratio to me. Do I have to go ultima? :)

If it helps, here is the Broadcast table creation SQL (denormalization is intended and shouldn't influence the queries above):

CREATE TABLE IF NOT EXISTS Broadcast (
  Id INTEGER PRIMARY KEY NOT NULL,
  Titleid INTEGER DEFAULT NULL,
  Title TEXT DEFAULT NULL COLLATE NOCASE,
  RegionId INTEGER DEFAULT NULL,
  ChannelId INTEGER DEFAULT NULL,
  StartTime DATETIME DEFAULT NULL,
  EndTime DATETIME DEFAULT NULL,
  SeriesId INTEGER DEFAULT NULL,
  Season INTEGER DEFAULT NULL,
  Episode INTEGER DEFAULT NULL,
  Vps DATETIME DEFAULT NULL,
  CategoryId INTEGER DEFAULT NULL,
  Genre1Id INTEGER DEFAULT NULL,
  Genre2Id INTEGER DEFAULT NULL,
  TipType TINYINT DEFAULT NULL,
  TitleOriginal TEXT DEFAULT NULL,
  Subtitle TEXT DEFAULT NULL,
  Description TEXT DEFAULT NULL COLLATE NOCASE,
  DescriptionShort TEXT DEFAULT NULL COLLATE NOCASE,
  HasMedia TINYINT DEFAULT NULL,
  HasRating TINYINT DEFAULT NULL,
  AgeMarkIds NVARCHAR(64) DEFAULT NULL,
  ProductionYearValues NVARCHAR(512) DEFAULT NULL,
  ProductionCountryIds NVARCHAR(64) DEFAULT NULL,
  OverallRating TINYINT DEFAULT NULL,
  TechnicalAttributeIds NVARCHAR(64) DEFAULT NULL,
  People TEXT DEFAULT NULL,
  Weekday TINYINT DEFAULT NULL,
  StartHour TINYINT DEFAULT NULL,
  BroadcastDay INTEGER DEFAULT NULL
);

- To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: database column attributes
arbalest06 <[EMAIL PROTECTED]> wrote:
> i got it working already..thank you so much!..would this
> check(length(a) <= 3) also work if the datatype is integer?

If you want to limit an integer, just write check(a <= maxValue)

Igor Tandetnik

- To unsubscribe, send email to [EMAIL PROTECTED] -
RE: [sqlite] database column attributes
Answers in line.

> -Original Message-
> From: arbalest06 [mailto:[EMAIL PROTECTED]
> Sent: Friday, December 21, 2007 9:23 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] database column attributes
>
> good day!
>
> i want to create a database with a table that has a column which is a
> TEXT..i want to specify the maximum length of the text..but i cant do
> TEXT(20)( ie., 20 characters max )..how can i do this?..

MyText VARCHAR(20)
(SQLite will not enforce the column size limit. That's the programmer's job:-)

> also i would like to specify that this column should be a
> required field..so
> if the column was not given a value during insert, it would
> return an sqlite
> error code..how can i implement this?..

MyText VARCHAR(20) Not Null
(SQLite will enforce this one.)

> thanx and God bless!
> --
> View this message in context:
> http://www.nabble.com/database-column-attributes-tp14457206p14457206.html
> Sent from the SQLite mailing list archive at Nabble.com.

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] database column attributes
i got it working already..thank you so much!..would this check(length(a) <= 3) also work if the datatype is integer?

Igor Tandetnik wrote:
>
> arbalest06 <[EMAIL PROTECTED]> wrote:
>> i want to create a database with a table that has a column which is a
>> TEXT..i want to specify the maximum length of the text..but i cant do
>> TEXT(20)( ie., 20 characters max )..how can i do this?..
>
> create table t (a text check(length(a) <= 20));
>
>> also i would like to specify that this column should be a required
>> field..so if the column was not given a value during insert, it would
>> return an sqlite error code..how can i implement this?..
>
> create table t (a text check(length(a) <= 20) not null);
>
> Igor Tandetnik
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
>
--
View this message in context: http://www.nabble.com/database-column-attributes-tp14457206p14457666.html
Sent from the SQLite mailing list archive at Nabble.com.

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Re: database column attributes
Igor Tandetnik wrote:
> arbalest06 <[EMAIL PROTECTED]> wrote:
>> i want to create a database with a table that has a column which is a
>> TEXT..i want to specify the maximum length of the text..but i cant do
>> TEXT(20)( ie., 20 characters max )..how can i do this?..
>
> create table t (a text check(length(a) <= 20));
>
>> also i would like to specify that this column should be a required
>> field..so if the column was not given a value during insert, it would
>> return an sqlite error code..how can i implement this?..
>
> create table t (a text check(length(a) <= 20) not null);

You might also want to add a minimum length check to prevent users from entering an empty string (i.e. length of zero with no text, but still not a null value).

create table t (a text not null check(length(a) > 0 and length(a) <= 20));

HTH
Dennis Cote

- To unsubscribe, send email to [EMAIL PROTECTED] -
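These CHECK constraints can be exercised quickly, e.g. from Python's sqlite3 module; rows that violate the constraint are rejected with an integrity error. A sketch using the combined minimum-and-maximum length form (table and column names are arbitrary):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE t(a TEXT NOT NULL "
    "CHECK(length(a) > 0 AND length(a) <= 20))")

con.execute("INSERT INTO t VALUES ('ok')")   # within limits: accepted

for bad in ("", "x" * 21):                   # too short, then too long
    try:
        con.execute("INSERT INTO t VALUES (?)", (bad,))
    except sqlite3.IntegrityError as exc:
        print("rejected:", exc)

count = con.execute("SELECT count(*) FROM t").fetchone()[0]
```

Only the valid row survives; both violating inserts raise the CHECK constraint error rather than storing truncated or empty data.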
Re: [sqlite] database column attributes
Is there a reason why you want to absolutely specify the maximum length for your text column? SQLite handles variable length text pretty well. If you want to create a column that needs to be filled in for each record, just use the "NOT NULL" keyword. See : http://www.sqlite.org/lang.html and http://www.sqlite.org/lang_createtable.html CREATE TABLE monty ( python TEXT NOT NULL, -- Column constraint holy_grail TEXT CHECK(length(holy_grail) <= 20) NOT NULL ); ganbatte kudasai! Samuel On 2007-12-21 10:23, arbalest06 wrote: > good day! > > i want to create a database with a table that has a column which is a > TEXT..i want to specify the maximum length of the text..but i cant do > TEXT(20)( ie., 20 characters max )..how can i do this?.. > > also i would like to specify that this column should be a required > field..so if the column was not given a value during insert, it would > return an sqlite error code..how can i implement this?.. > > thanx and God bless!
Re: [sqlite] extremely differing speed on :memory: queries depending on initial db size
[EMAIL PROTECTED] wrote:
> i got a question concerning in-memory-behaviour of sqlite.
>
> this is what i'm trying to do:
>
> i load an existing file db into a memory one (create a :memory:-connection, attaching the file db which has ~ 60 mb and then copying its tables and indices and detaching the file db - this works quite great so far, lasts around 5 secs). now i begin a transaction on the in-memory-db and start executing my queries (some 10k inserts, updates and deletes). and this process is horribly (sic!) slow.
>
> in comparison i did all the inserts into an empty memory-db (without cloning the whole file db first) which is faster by around factor 1000...
>
> anyone has a clue what happens there? i switched off windows virtual memory paging - the same effect. disk io is near zero, it's really the db commands only that causes this huge difference... working on the file db is much faster than on memory given the same process. do i have to pragma sth special (tried almost everything here, too...)

In tests I have done I found that the :memory: databases are slightly slower than file based databases for exactly the same operations, as long as the database size is fairly small. I believe this is true as long as the entire file database fits in the OS disk cache. I suspect the time difference is due to the OS cache paging code, which is highly optimized, being slightly faster than SQLite's memory paging code.

You are seeing a factor of 1000 difference between inserts into an *empty* file database and a 60 MB memory database when doing 10K inserts, updates, and deletes. That is probably almost entirely due to the extra time it takes to modify indexes on the database tables after the database is populated. You need to compare the time for these inserts into an empty file database with the time it takes to do the same inserts into an empty memory database.
If these times are similar as I suspect, then you might want to try dropping your indexes (if they aren't needed for doing the inserts) before the inserts, inserting the new data, and then re-creating the indexes after all the new data is added. HTH Dennis Cote
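The drop-insert-rebuild idea above can be sketched like this with Python's sqlite3 module; the table, index, and data are hypothetical stand-ins for the poster's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tracks (title TEXT, singer TEXT)")
con.execute("CREATE INDEX idx_singer ON tracks(singer)")

rows = [("song %d" % i, "artist %d" % (i % 100)) for i in range(10000)]

# Drop the index, bulk-insert inside one transaction, then rebuild it,
# so the index is built once at the end instead of updated row by row.
con.execute("DROP INDEX idx_singer")
with con:
    con.executemany("INSERT INTO tracks VALUES (?, ?)", rows)
con.execute("CREATE INDEX idx_singer ON tracks(singer)")

print(con.execute("SELECT count(*) FROM tracks").fetchone()[0])  # 10000
```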
[sqlite] Re: database column attributes
arbalest06 <[EMAIL PROTECTED]> wrote:
> i want to create a database with a table that has a column which is a
> TEXT..i want to specify the maximum length of the text..but i cant do
> TEXT(20)( ie., 20 characters max )..how can i do this?..

create table t (a text check(length(a) <= 20));

> also i would like to specify that this column should be a required
> field..so if the column was not given a value during insert, it would
> return an sqlite error code..how can i implement this?..

create table t (a text check(length(a) <= 20) not null);

Igor Tandetnik
Re: [sqlite] extremely differing speed on :memory: queries depending on initial db size
Are you sure the structure on disk is the same as in memory? My initial guess is that you have some indexes that need to be updated when you execute your inserts/updates. However, if the structure is exactly the same on disk and in memory I'm left clueless. Can you provide me with a copy of the data and the code that does the inserts? If so, I would test to see if I get the same behaviour on Linux x86 and x86_64. Samuel On 2007-12-21 10:13, [EMAIL PROTECTED] wrote: > hi all, > > i got a question concerning in-memory-behaviour of sqlite. > > this is what i'm trying to do: > > i load an existing file db into a memory one (create a :memory:-connection, > attaching the file db which has ~ 60 mb and then copying its tables and > indices and detaching the file db - this works quite great so far, lasts > around 5 secs). now i begin a transaction on the in-memory-db and start > executing my queries (some 10k inserts, updates and deletes). and this > process is horribly (sic!) slow. > > in comparison i did all the inserts into an empty memory-db (without > cloning the whole file db first) which is faster by around factor 1000... > > anyone has a clue what happens there? i switched off windows virtual memory > paging - the same effect. disk io is near zero, it's really the db commands > only that causes this huge difference... working on the file db is much > faster than on memory given the same process. do i have to pragma sth > special (tried almost everything here, too...) > > thnx for your help, > pong *stuck
Re: [sqlite] extremely differing speed on :memory: queries depending on initial db size
[EMAIL PROTECTED] wrote: > hi all, > > i got a question concerning in-memory-behaviour of sqlite. > > this is what i'm trying to do: > > i load an existing file db into a memory one (create a :memory:-connection, > attaching the file db which has ~ 60 mb and then copying its tables and > indices and detaching the file db - this works quite great so far, lasts > around 5 secs). now i begin a transaction on the in-memory-db and start > executing my queries (some 10k inserts, updates and deletes). and this > process is horribly (sic!) slow. > > in comparison i did all the inserts into an empty memory-db (without cloning > the whole file db first) which is faster by around factor 1000... > > anyone has a clue what happens there? I don't have any ideas. Please provide more clues. What version of SQLite are you using? Did you DETACH the disk database before running the queries? What kind of queries are you using? What kind of data is in your database? Can you post examples? -- D. Richard Hipp <[EMAIL PROTECTED]>
[sqlite] database column attributes
good day! i want to create a database with a table that has a column which is a TEXT..i want to specify the maximum length of the text..but i cant do TEXT(20)( ie., 20 characters max )..how can i do this?.. also i would like to specify that this column should be a required field..so if the column was not given a value during insert, it would return an sqlite error code..how can i implement this?.. thanx and God bless!
[sqlite] extremely differing speed on :memory: queries depending on initial db size
hi all, i got a question concerning in-memory-behaviour of sqlite. this is what i'm trying to do: i load an existing file db into a memory one (create a :memory:-connection, attaching the file db which has ~ 60 mb and then copying its tables and indices and detaching the file db - this works quite great so far, lasts around 5 secs). now i begin a transaction on the in-memory-db and start executing my queries (some 10k inserts, updates and deletes). and this process is horribly (sic!) slow. in comparison i did all the inserts into an empty memory-db (without cloning the whole file db first) which is faster by around factor 1000... anyone has a clue what happens there? i switched off windows virtual memory paging - the same effect. disk io is near zero, it's really the db commands only that causes this huge difference... working on the file db is much faster than on memory given the same process. do i have to pragma sth special (tried almost everything here, too...) thnx for your help, pong *stuck
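The load-into-memory step described above (attach the file database, copy each table, detach) can be sketched as follows with Python's sqlite3 module. The file path and schema are hypothetical stand-ins, and for brevity only tables are copied, not indexes:

```python
import os
import sqlite3
import tempfile

# Build a small file database to stand in for the real ~60 MB one.
path = os.path.join(tempfile.mkdtemp(), "example.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE tracks (title TEXT)")
disk.executemany("INSERT INTO tracks VALUES (?)",
                 [("t%d" % i,) for i in range(5)])
disk.commit()
disk.close()

# Clone it into a :memory: database via ATTACH.
mem = sqlite3.connect(":memory:")
mem.execute("ATTACH DATABASE ? AS src", (path,))
for name, sql in mem.execute(
        "SELECT name, sql FROM src.sqlite_master "
        "WHERE type = 'table' AND sql IS NOT NULL").fetchall():
    mem.execute(sql)  # recreate the table in the main (memory) db
    mem.execute("INSERT INTO main.%s SELECT * FROM src.%s" % (name, name))
mem.commit()          # must commit before DETACH is allowed
mem.execute("DETACH DATABASE src")

print(mem.execute("SELECT count(*) FROM tracks").fetchone()[0])  # 5
```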
Re: [sqlite] shared cache mode locking
The ticket has already been resolved, I see. So it has been considered a bug. In my earlier reply I tried to defend the current behaviour as being in line with the document, http://sqlite.org/sharedcache.html. I'm happy to change my mind now. Only I miss something in the model as described in the document. This may either be: - exclusive transactions as a new kind of transaction, apart from read- and write-transactions, or - database-level locking as a new level above transaction-level locking. Maybe this suggestion is too naive; anyway it helps me explain the wonderful cache sharing.

Ken wrote:
> Ed, Dan opened a ticket. I agree the documentation isn't clear on the Exclusive locking state. Not really sure if this is by design or a bug at this stage. I do think it's a great feature of the shared cache mode to allow table level locking. But I'm curious, with this table level locking, what would happen if two threads performed writes to two separate tables concurrently using only a begin immediate. Thread a writes to tab1, thread b writes to tab2. (Is this allowed? Or is SQLITE_LOCKED returned?) If it is allowed, then would there be two journal files concurrently existing? And what happens during a crash with two journals? This gets complicated very quickly. Ken
>
> Ed Pasma <[EMAIL PROTECTED]> wrote:
>> Hello, Empirically I found that it is exactly true. Must admit I'm confused, but maybe it is in line with the Shared-Cache locking model. This does not mention the EXCLUSIVE locking state. The most 'secure' locking state it mentions is a write-transaction, and this can coexist with read-transactions from others. Thus "begin exclusive" starts a write-transaction and the on-going read does not interfere. The error message seems to clarify the situation further: database table is locked. Thus the collision occurs at the table-level. And yes, taking different tables for read and write, it does not occur. Practically this may not help very much.
But maybe the following does, in case you have a busy_timeout setting. When shared-cache mode is enabled, the timeout setting appears to be ignored by SQLite. This makes locking situations surface rather soon, also when there is no dead-lock. The situation may be handled by a programmatic retry? Regards, Ed

On 19 Dec 2007, at 19:12, Ken wrote:
> Some additional info: when SQLITE_LOCKED is returned there is another thread that appears to be reading the same table. Does the sqlite3 step return SQLITE_LOCKED in this case? Thanks, Ken
>
> Ken wrote:
>> While using the new 3.5.4 sqlite3_enable_shared_cache I ran into a strange lock situation. SQLITE_LOCKED is returned from an insert statement, even though the thread/connection performed a successful "begin exclusive" transaction. begin exclusive insert into table... ---> returns SQLITE_LOCKED Is it possible for both connections to begin exclusive transactions whilst having the shared cache enabled? Thanks, ken
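The programmatic-retry idea mentioned above might look something like this hypothetical helper (Python's sqlite3 module); it simply re-runs a statement with a back-off when SQLite reports a transient lock, and re-raises any other error immediately:

```python
import sqlite3
import time

def execute_with_retry(con, sql, params=(), retries=5, delay=0.1):
    """Sketch of a manual retry: with shared-cache mode the busy_timeout
    may not apply, so back off and retry the statement ourselves."""
    for attempt in range(retries):
        try:
            return con.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == retries - 1:
                raise                            # not a lock, or out of tries
            time.sleep(delay * (attempt + 1))    # linear back-off
```

This is only a sketch; a real application would also need to decide whether to roll back and restart the whole transaction rather than a single statement.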
Re: [sqlite] table list
Ged Murphy wrote: > How can I get a list of all tables programmatically in a given SQLite database? Do a SELECT on the sqlite_master table and pick the info you need. Michael
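For example, from Python's sqlite3 module (table names here are made up), the sqlite_master table holds one row per schema object, so filtering on type = 'table' gives the table list:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE artists (name TEXT)")
con.execute("CREATE TABLE albums (title TEXT)")

# sqlite_master holds one row per schema object; filter on type='table'.
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)  # ['albums', 'artists']
```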
[sqlite] table list
How can I get a list of all tables programmatically in a given SQLite database? Thanks, Ged.