Sorry, something went wrong on the previous post.

From the docs I read that for the new version:

- When new tables are created using CREATE TABLE ... AS SELECT ..., the 
datatype of the columns is the simplified SQLite datatype (TEXT, INT, REAL, 
NUMERIC, or BLOB) instead of a copy of the original datatype from the source 
table.

I don't know why this has been done; as far as I can tell it will cause 
trouble in several wrappers. E.g. the Delphi wrappers depend on regular 
datatypes like varchar(xx), number, etc. If SQLite 'translates' these into 
other datatypes, it will cause inconsistencies. If the change is necessary, 
why not then allow only the primitive datatypes within the 'create'? Then 
the reported datatype would be the same as the datatype used within the 
create. What will happen on a create table when the datatype is other than 
the primitives? How does SQLite translate these? Or am I completely 
misunderstanding things?
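[To make the 'translation' concrete: SQLite derives a column's type *affinity* from the declared type name, and per the release notes quoted above, CREATE TABLE ... AS SELECT records only the simplified name. A rough sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration, and the exact strings PRAGMA table_info reports for the copied table depend on the SQLite version:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src (name VARCHAR(20), amount NUMBER)")
con.execute("INSERT INTO src VALUES ('abc', '4.5')")

# Declared type names determine *affinity*: anything containing "CHAR"
# gets TEXT affinity; "NUMBER" falls through to NUMERIC affinity.
text_t, num_t = con.execute(
    "SELECT typeof(name), typeof(amount) FROM src").fetchone()
print(text_t, num_t)  # text real -- '4.5' was coerced by NUMERIC affinity

con.execute("CREATE TABLE dst AS SELECT * FROM src")

# PRAGMA table_info column 2 is the declared type. For dst, 3.6.15
# reports the simplified names rather than VARCHAR(20)/NUMBER; the
# exact strings vary between SQLite versions, so just print them.
print([row[2] for row in con.execute("PRAGMA table_info(dst)")])
```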

albert

----- Original Message ----- 
From: <sqlite-users-requ...@sqlite.org>
To: <sqlite-users@sqlite.org>
Sent: Tuesday, June 16, 2009 2:38 PM
Subject: sqlite-users Digest, Vol 18, Issue 63


> Send sqlite-users mailing list submissions to
> sqlite-users@sqlite.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> or, via email, send a message with subject or body 'help' to
> sqlite-users-requ...@sqlite.org
>
> You can reach the person managing the list at
> sqlite-users-ow...@sqlite.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of sqlite-users digest..."
>
>
> Today's Topics:
>
>   1. Database inserts gradually slowing down (Jens Páll Hafsteinsson)
>   2. Re: Database inserts gradually slowing down (hiral)
>   3. Re: Database inserts gradually slowing down (hiral)
>   4. Re: Database inserts gradually slowing down (Pavel Ivanov)
>   5. Re: SQlite3 - SQL injection using ruby (John Elrick)
>   6. Datatypes (A Drent)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 16 Jun 2009 12:06:33 +0000
> From: Jens Páll Hafsteinsson <j...@lsretail.com>
> Subject: [sqlite] Database inserts gradually slowing down
> To: "sqlite-users@sqlite.org" <sqlite-users@sqlite.org>
> Message-ID:
> <66ed19f93cca594894a9bd4c9ac5b3a32c83bc1...@palli.last.local>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi
>
> I've been running some tests against sqlite and have found that inserts 
> are gradually slowing down. Since I'm new to sqlite I might be doing 
> something wrong and thought I'd ask here if anyone has seen this before or 
> know what might be causing this.
>
> The test I'm running creates a database containing a single table (int, 
> int, varchar(100)) along with an index (not unique) on the first field. I 
> then perform the following operations (all records have unique data in the 
> first field):
>
>
> 1.       start a transaction
>
> 2.       insert 1000 records
>
> 3.       commit
>
> 4.       repeat steps 1-3 100 times
>
> 5.       delete everything from the table
>
> 6.       Start again at step 1
>
> The time taken to execute steps 1-4 increases gradually from taking around 
> 16 seconds the first time to just over 28 seconds after about 2400 
> iterations. To me, this is alarming since this time increase seems not to 
> be asymptotic (not sure though, there is a slight curve on the graph and I 
> haven't done any curve fitting) and I fear that I will end up with an 
> unusably slow database after some time of use. Even if I shut down the 
> test application and start again (on the same database and without 
> re-creating the table), it just continues running as before it was 
> stopped, that is, taking 28 seconds (and increasing) if I stop it after 
> 2400 iterations.
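[For anyone wanting to reproduce this, the loop described above can be sketched in a few lines. Python's built-in sqlite3 module is used purely for illustration; the original test program is separate C/Visual Studio code, and the table and index names here are invented:]

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")  # the original test uses an on-disk file
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, c VARCHAR(100))")
con.execute("CREATE INDEX t_a ON t(a)")

def steps_1_to_4(start):
    """100 transactions of 1000 inserts each; first column kept unique."""
    t0 = time.perf_counter()
    for batch in range(100):
        base = start + batch * 1000
        with con:  # BEGIN ... COMMIT around each batch of 1000
            con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                            [(base + i, i, "x" * 100) for i in range(1000)])
    return time.perf_counter() - t0

elapsed = steps_1_to_4(0)
inserted = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
with con:
    con.execute("DELETE FROM t")  # step 5
print(inserted, round(elapsed, 2))
```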
>
> At first I ran this for shorter periods without an index and think I saw 
> the same behavior, but I have to verify that to be sure.
>
> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual 
> Studio 2008.
>
> If anyone is interested I can e-mail the collected data and the test 
> program I'm using.
>
> Any pointers appreciated.
>
> Thanks,
> JP
>
> ---------------------------------------------------------------------
> "What you take away is more important than what you add." Chuck Moore
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 16 Jun 2009 17:39:01 +0530
> From: hiral <hiralsmaill...@gmail.com>
> Subject: Re: [sqlite] Database inserts gradually slowing down
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID:
> <b1f4b3d0906160509p6ddcdf03tbd406e733996a...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi,
>
> Can you please let me know which version of sqlite you are using?
>
> Thank you.
> -Hiral
>
> On Tue, Jun 16, 2009 at 5:36 PM, Jens Páll Hafsteinsson
> <j...@lsretail.com> wrote:
>
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 16 Jun 2009 17:39:58 +0530
> From: hiral <hiralsmaill...@gmail.com>
> Subject: Re: [sqlite] Database inserts gradually slowing down
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID:
> <b1f4b3d0906160509h482a9c73r6599cc1b27796...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Sorry, I got it.
> Did you try 3.6.14.2?
>
> Thank you.
> -Hiral
>
> On Tue, Jun 16, 2009 at 5:39 PM, hiral <hiralsmaill...@gmail.com> wrote:
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 16 Jun 2009 08:14:50 -0400
> From: Pavel Ivanov <paiva...@gmail.com>
> Subject: Re: [sqlite] Database inserts gradually slowing down
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID:
> <f3d9d2130906160514x76d166a5q7210b16ef11dc...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> How do you do step 5? Like "delete from table" or "delete from table
> where ..."? Do you see any degradation in the disk space used by the
> database along with the slowness?
>
> Pavel
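[Worth noting for the disk-space question: by default a plain DELETE moves the freed pages onto SQLite's internal freelist, so the file does not shrink until a VACUUM (or unless auto_vacuum is enabled). A small sketch with Python's built-in sqlite3 module; the file path is a throwaway temp file:]

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
with con:
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    [(i, "x" * 100) for i in range(10000)])
size_full = os.path.getsize(path)

with con:
    con.execute("DELETE FROM t")              # pages go onto the freelist...
size_after_delete = os.path.getsize(path)     # ...so the file stays big
free_pages = con.execute("PRAGMA freelist_count").fetchone()[0]

con.execute("VACUUM")                         # rebuilds the file compactly
size_after_vacuum = os.path.getsize(path)
print(size_full, size_after_delete, free_pages, size_after_vacuum)
```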
>
> On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
> Hafsteinsson <j...@lsretail.com> wrote:
>
>
> ------------------------------
>
> Message: 5
> Date: Tue, 16 Jun 2009 08:21:59 -0400
> From: John Elrick <john.elr...@fenestra.com>
> Subject: Re: [sqlite] SQlite3 - SQL injection using ruby
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID: <4a378e67.9040...@fenestra.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> dave lilley wrote:
>> Many thanks to all who have replied,
>>
>> I now understand the difference and shall use that approach to creating
>> my queries.
>>
>> Regarding the "ruby way", it was more how I saw saving code typing by
>> injecting different table, field and user data into one query.
>>
>> BUT in the interests of eliminating/reducing SQL injection I shall pass
>> the table name to my method, test against it to select which query to use
>> against that table, and use the SQLite3 binding method.
>>
>> e.g.
>> def makesql(tablename, uservar)
>>   case tablename
>>   when "customers"
>>     stmt = "select * from customers where cust_nos = ?"
>>   when "jobs"
>>     stmt = ....
>>   end
>>   db.execute(stmt, uservar)
>> end
>>
>>
>
> Unless your user is typing in the table name also, you don't have to go
> that far. Interpolation is fine for system defined variables.
>
>
> John
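[The point about binding user input is easy to demonstrate. The same idea as the Ruby discussion, sketched with Python's built-in sqlite3 module (table and data invented): a bound parameter is treated as one literal value, while string interpolation lets the input rewrite the query.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (cust_nos TEXT, name TEXT)")
con.execute("INSERT INTO customers VALUES ('42', 'Alice')")

evil = "42' OR '1'='1"

# Interpolating user input builds a different query than intended:
rows_interp = con.execute(
    f"SELECT * FROM customers WHERE cust_nos = '{evil}'").fetchall()

# Binding treats the whole string as a single literal value:
rows_bound = con.execute(
    "SELECT * FROM customers WHERE cust_nos = ?", (evil,)).fetchall()

print(len(rows_interp), len(rows_bound))  # 1 0
```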
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 16 Jun 2009 14:38:44 +0200
> From: "A Drent" <albert.dr...@rug.nl>
> Subject: [sqlite] Datatypes
> To: <sqlite-users@sqlite.org>
> Message-ID: <fb8c37319ffc4aa29794230811bce...@129125168214>
> Content-Type: text/plain; format=flowed; charset="iso-8859-1";
> reply-type=original
>
> From the new version I read:
>
> ----- Original Message ----- 
> From: <sqlite-users-requ...@sqlite.org>
> To: <sqlite-users@sqlite.org>
> Sent: Tuesday, June 16, 2009 2:00 PM
> Subject: sqlite-users Digest, Vol 18, Issue 62
>
>
>> Send sqlite-users mailing list submissions to
>> sqlite-users@sqlite.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> or, via email, send a message with subject or body 'help' to
>> sqlite-users-requ...@sqlite.org
>>
>> You can reach the person managing the list at
>> sqlite-users-ow...@sqlite.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of sqlite-users digest..."
>>
>>
>> Today's Topics:
>>
>>   1. Re: sqlite3_step performance degredation (Ken)
>>   2. Re: sqlite3_step performance degredation (Simon Slavin)
>>   3. Re: sqlite3_step performance degredation (Jim Wilcoxson)
>>   4. Re: SQlite3 - SQL injection using ruby (dave lilley)
>>   5. Bug in retrieving last rowid? (hartwig.wiesm...@online.nl)
>>   6. Sqlite-3.5.9: getting sqlite_autoindex error (h o)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Mon, 15 Jun 2009 14:33:04 -0700 (PDT)
>> From: Ken <kennethinbox-sql...@yahoo.com>
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
>> Message-ID: <780955.31349...@web81003.mail.mud.yahoo.com>
>> Content-Type: text/plain; charset=iso-8859-1
>>
>>
>> Also is there an index on the table B.ID field?
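[Ken's question matters because without an index on B.ID each of the 900 queries scans all 180,000 rows. A quick way to check, sketched with Python's built-in sqlite3 module (schema invented to mirror the description; EXPLAIN QUERY PLAN wording varies slightly between SQLite versions):]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE b (id INTEGER, payload TEXT)")  # FK back to a.id

plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM b WHERE id = ?", (1,)).fetchall()

con.execute("CREATE INDEX b_id ON b(id)")
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM b WHERE id = ?", (1,)).fetchall()

print(plan_before[-1][-1])  # a full scan of b
print(plan_after[-1][-1])   # a search using index b_id
```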
>>
>> --- On Mon, 6/15/09, Mike Borland <mike.borl...@cygnetscada.com> wrote:
>>
>>> From: Mike Borland <mike.borl...@cygnetscada.com>
>>> Subject: Re: [sqlite] sqlite3_step performance degredation
>>> To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
>>> Date: Monday, June 15, 2009, 4:11 PM
>>> Nuno, unfortunately your psychic
>>> skills are a bit off on this one. Sorry I wasn't more
>>> explicit. I am not using any LIMIT or OFFSET to do any
>>> virtual scrolling. Basically I have table A which has
>>> 900 rows. Table B has 180,000 rows (900 * 200) which
>>> has a foreign key relationship back to table A. So for
>>> each row in table A, there are 200 rows in table B. My
>>> query is basically a "SELECT * FROM Table B WHERE ID =
>>> TableA.ID". I'm executing this query 900 times, once
>>> for each row in table A.
>>>
>>> When I start the 900 read iterations (always in the same
>>> order), the first one generally reads in about 50ms and by
>>> the last read, it's taking roughly 1000ms. Sometimes
>>> it slows down immediately, sometimes after the 100th
>>> iteration. The only absolutely reproducible aspect is
>>> that it always slows down eventually and once it slows down,
>>> it never speeds back up. I don't believe it's a
>>> locking issue since my timer doesn't start until the query
>>> is successfully executed.
>>>
>>> Any ideas? Would the occasional write operation in
>>> the midst of these reads cause any permanent slow down to
>>> the read time? Thanks.
>>>
>>> Mike Borland
>>>
>>> -----Original Message-----
>>> From: Nuno Lucas [mailto:ntlu...@gmail.com]
>>>
>>> Sent: Friday, June 12, 2009 7:16 PM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] sqlite3_step performance degredation
>>>
>>> On Sat, Jun 13, 2009 at 1:52 AM, Mike
>>> Borland<mike.borl...@cygnetscada.com>
>>> wrote:
>>> > I have a fairly complex program where I am seeing a performance
>>> > degradation of the sqlite3_step() function. Basically I'm iterating
>>> > roughly 200 rows at a time, over and over. The only work happening
>>> > during the iteration is I'm copying the record into an array. At first,
>>> > sqlite3_step() takes less than a millisecond to run. After 0-50
>>> > iterations, it's taking anywhere from 10-100ms.
>>> >
>>> > Does anybody have any insight into what's happening behind the scenes
>>> > with this function to help me track down the cause? I appreciate it!
>>>
>>> You should explicitly say what your SQL query is. Without
>>> that we can
>>> only guess.
>>>
>>> My current "psychic" guess is that you are using LIMIT to obtain those
>>> 200 rows, one "page" at a time, and as you go advancing "pages" it
>>> becomes slower and slower.
>>> If this is true, then you should re-think your design, as LIMIT just
>>> skips the rows, but it will "generate" them first, meaning it becomes
>>> slower as you advance the given offset.
>>> Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor (and
>>> notice the "What not to do" at the end, talking about "LIMIT" and
>>> "OFFSET").
>>>
>>> If my psychic abilities are becoming weak, then please supply your
>>> exact query that is getting slower (and maybe your database schema)
>>> and then someone can give you an exact answer.
>>>
>>>
>>> Regards,
>>> ~Nuno Lucas
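[The scrolling-cursor advice above can be made concrete: instead of OFFSET, remember the last key seen and continue from there, so each page is a cheap index search. A sketch with Python's built-in sqlite3 module; the schema and page size are invented:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
with con:
    con.executemany("INSERT INTO t (payload) VALUES (?)",
                    [("row %d" % i,) for i in range(1000)])

def pages(page_size=200):
    """Scroll with a key cursor instead of OFFSET: each page starts
    where the previous one ended, so no rows are generated and skipped."""
    last_id = 0
    while True:
        rows = con.execute(
            "SELECT id, payload FROM t WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, page_size)).fetchall()
        if not rows:
            break
        last_id = rows[-1][0]
        yield rows

total = sum(len(p) for p in pages())
print(total)  # 1000
```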
>>>
>>> >
>>> > Mike Borland
>>> >
>>> > _______________________________________________
>>> > sqlite-users mailing list
>>> > sqlite-users@sqlite.org
>>> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Mon, 15 Jun 2009 22:53:03 +0100
>> From: Simon Slavin <slav...@hearsay.demon.co.uk>
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
>> Message-ID: <f9a438ee-0752-4922-b882-2b77c1c2e...@hearsay.demon.co.uk>
>> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>>
>>
>> On 15 Jun 2009, at 10:11pm, Mike Borland wrote:
>>
>>>
>>> When I start the 900 read iterations (always in the same order), the
>>> first one generally reads in about 50ms and by the last read, it's
>>> taking roughly 1000ms.  Sometimes it slows down immediately,
>>> sometimes after the 100th iteration.
>>
>> Put some code in your application that closes and reopens the database
>> after iteration 500 (or every 100).  See if it speeds up when you've
>> just done that.  If it does, the bug is somewhere in your code or
>> SQLite.  If it doesn't, the bug is somewhere in memory management or
>> disk access.
>>
>> Simon.
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Mon, 15 Jun 2009 18:59:49 -0400
>> From: Jim Wilcoxson <pri...@gmail.com>
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
>> Message-ID:
>> <c5830b750906151559m29e5f114k39e88f4e36993...@mail.gmail.com>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> There was a recent SQLite bug that caused the size of the SQLite cache
>> to shrink in some circumstances, and the longer a program ran, the
>> smaller the cache became.  Maybe you are running into this bug.  IIRC,
>> you had to do an update in the select loop to trigger the bug, so if
>> you're not doing that, maybe this doesn't apply.
>>
>> Jim
>>
>> On 6/15/09, Mike Borland <mike.borl...@cygnetscada.com> wrote:
>>
>>
>> -- 
>> Software first.  Software lasts!
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Tue, 16 Jun 2009 18:43:58 +1200
>> From: dave lilley <dgl...@gmail.com>
>> Subject: Re: [sqlite] SQlite3 - SQL injection using ruby
>> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
>> Message-ID:
>> <82aaacae0906152343k44a5d89ap498c964913a83...@mail.gmail.com>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> Many thanks to all who have replied,
>>
>> I now understand the difference and shall use that approach to creating
>> my queries.
>>
>> Regarding the "ruby way", it was more how I saw saving code typing by
>> injecting different table, field and user data into one query.
>>
>> BUT in the interests of eliminating/reducing SQL injection I shall pass
>> the table name to my method, test against it to select which query to use
>> against that table, and use the SQLite3 binding method.
>>
>> e.g.
>> def makesql(tablename, uservar)
>>   case tablename
>>   when "customers"
>>     stmt = "select * from customers where cust_nos = ?"
>>   when "jobs"
>>     stmt = ....
>>   end
>>   db.execute(stmt, uservar)
>> end
>>
>> again many thanks to all,
>>
>>
>> Dave.
>>
>> 2009/6/16 John Elrick <john.elr...@fenestra.com>
>>
>>> dave lilley wrote:
>>> > Many thanks John so if i take that example and push it out so i can
>>> > have
>>> 1
>>> > method that can return a SQL select statement on any table, field and
>>> search
>>> > criteria i would only need to do this?
>>> >
>>> > In ruby it would be ....
>>> >
>>> > make_SQL (table, field, criteria)
>>> >    stmt = "select * from #{table} where #{field} = #{criteria}"
>>> >    row = db.execute(stmt)
>>> > end
>>> >
>>> > and SQLite3 way would be ...
>>> >
>>> > make_SQL(table,field,criteria)
>>> >   stmt = "select * from ? where ? = ?"
>>> >   row = db.execute(stmt)
>>> > end
>>> >
>>> > would this presumption be correct?
>>> >
>>> >
>>>
>>> No.  You would have to use the table and field names directly:
>>>
>>> def make_SQL(table, field, criteria)
>>>   stmt = "select * from #{table} where #{field} = ?"
>>>   row = db.execute(stmt, criteria)
>>> end
>>>
>>>
>>>
>>> John
>>> _______________________________________________
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>>
>> ------------------------------
>>
>> Message: 5
>> Date: Mon, 15 Jun 2009 22:15:23 +0200 (CEST)
>> From: hartwig.wiesm...@online.nl
>> Subject: [sqlite] Bug in retrieving last rowid?
>> To: Sqlite-users@sqlite.org
>> Message-ID: <746050942.97311245096923844.javamail.r...@mailstore13>
>> Content-Type: text/plain; charset=utf-8
>>
>> I have created tables A & AS_FTS
>>
>> "create table A (id integer primary key, string Text);"
>>
>> "create virtual table AS_FTS (Name);"
>>
>> and a trigger
>>
>> "insert into A_FTS (rowid,Name) values (New.%@,New.%@);"
>>
>> (and a not shown delete trigger).
>>
>> Now, I enter two strings into the table: one and two. I delete table
>> entry one and afterwards insert one again, two times. Finally I check the
>> last inserted rowid using "sqlite_last_insert_rowid". Unfortunately, this
>> is wrong.
>>
>> BTW: other non-FTS insert triggers seem to work. I am using 3.6.14.
>>
>> Any ideas?
>>
>> Hartwig
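[For reference, current SQLite documents that an INSERT performed inside a trigger only affects last_insert_rowid() while the trigger is running; once the trigger ends, the value reverts to the row inserted by the outer statement. A sketch of that behaviour with an ordinary (non-FTS) trigger via Python's built-in sqlite3 module; I have not checked this against 3.6.14, where the behaviour reported above may well be the bug:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE a (id INTEGER PRIMARY KEY, s TEXT);
  CREATE TABLE log (id INTEGER PRIMARY KEY, s TEXT);
  -- Pre-fill log so its rowids run ahead of a's.
  INSERT INTO log (s) VALUES ('x'), ('y'), ('z');
  CREATE TRIGGER a_ai AFTER INSERT ON a BEGIN
    INSERT INTO log (s) VALUES (new.s);
  END;
""")

con.execute("INSERT INTO a (s) VALUES ('one')")
rowid = con.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rowid)  # 1: the row inserted into a, not the trigger's row 4 in log
```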
>>
>>
>> ------------------------------
>>
>> Message: 6
>> Date: Tue, 16 Jun 2009 16:34:30 +0530
>> From: h o <hiralsmaill...@gmail.com>
>> Subject: [sqlite] Sqlite-3.5.9: getting sqlite_autoindex error
>> To: sqlite-users@sqlite.org
>> Message-ID:
>> <b1f4b3d0906160404s62ef7877y5f49df6238650...@mail.gmail.com>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> Hi,
>>
>> I am using sqlite-3.5.9 and observing a 'disk image malformed' error on
>> NFS. On doing 'PRAGMA integrity_check' I got the following messages...
>>
>> SQLite version 3.5.9
>> Enter ".help" for instructions
>> sqlite> PRAGMA integrity_check;
>> *** in database main ***
>> Page 5275 is never used
>> wrong # of entries in index sqlite_autoindex_<table_name>_1
>> sqlite>
>>
>> Can you please let me know what the problem is here?
>>
>> Thank you.
>> -Hiral
>>
>>
>> ------------------------------
>>
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>> End of sqlite-users Digest, Vol 18, Issue 62
>> ********************************************
>>
>
>
>
> ------------------------------
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> End of sqlite-users Digest, Vol 18, Issue 63
> ********************************************
> 

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
