On 08/28/2015 09:36 PM, Scott Robison wrote:
> On Fri, Aug 28, 2015 at 3:47 AM, Yahoo! Mail <stefanossofroniou542 at
> yahoo.com> wrote:
>>
>> On 08/26/2015 09:03 PM, Richard Hipp wrote:
>>
>>> Time stands still for multiple rows, as long as they are within the
>>> same sqlite3_step() call.  For example, if you run:
>>>
>>>       CREATE TABLE t1(a DATETIME);
>>>       WITH RECURSIVE
>>>          c(x) AS (VALUES(1) UNION ALL SELECT x+1 FROM c WHERE x<1000000000)
>>>       INSERT INTO t1(a) SELECT datetime('now') FROM c;
>>>
>>> The entire INSERT statement will be a single sqlite3_step() call, and
>>> so all billion rows of t1 will get set to the same time.
>>>
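As a side note, a quick way to verify the single-sqlite3_step() behaviour
described above is to count the distinct timestamps once the INSERT has
finished; a minimal sketch, reusing the t1 and a names from the example:

      SELECT count(*), count(DISTINCT a) FROM t1;
      -- expected: the full row count, but exactly 1 distinct datetime value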
>> I have tested this code with version 3.8.10.2 using "DB Browser for
>> SQLite" and it crashed; the same happened with SQLite Manager, which uses
>> the same version. I then switched to the latest SQLite version available
>> in GNU/Linux Debian testing 64-bit, 3.8.11.1, and something strange is
>> happening. I hope I'm the reason behind its rather peculiar behavior.
>>
>> I tested the following code
>>
>> sqlite> drop table if exists t1; create table t1(a datetime); begin; with
>> recursive c(x) as (values(1) union all select x + 1 from c where x <
>> 1000000000) insert into t1(a) select datetime('now') from c; commit;
>>
>> It takes ages to finish, and that is logical; it has a billion rows to
>> insert, even though I have wrapped it in begin/commit.
>>
>> The strange behavior shows up when I use the up arrow key to reuse the
>> previous statement, replacing 1000000000 with 10. It takes the same time
>> to finish as before, and that is not right; I'm dropping the table first,
>> so inserting 10 rows should take nearly 0 milliseconds.
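One way to narrow this down is to enter each statement on its own line with
.timer on, so that a separate time is reported for the DROP and for the
INSERT; a rough sketch, assuming the sqlite3 command-line shell:

      .timer on
      drop table if exists t1;   -- timed on its own: freeing the old table's pages
      create table t1(a datetime);
      begin;
      with recursive
         c(x) as (values(1) union all select x + 1 from c where x < 10)
      insert into t1(a) select datetime('now') from c;   -- timed on its own: the 10-row insert
      commit;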
>>
>> Can someone confirm this?
>>
> I have been using 3.8.11, so I just downloaded 3.8.11.1 to test your exact
> query, cut & pasted from above. Here is the output from my session:
>
> X:\>sqlite3
> SQLite version 3.8.11.1 2015-07-29 20:00:57
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> .timer on
> sqlite>  drop table if exists t1; create table t1(a datetime); begin; with
> recursive c(x) as (values(1) union all select x + 1 from c where x <
> 10000000) insert into t1(a) select datetime('now') from c; commit;
> Run Time: real 16.751 user 16.520506 sys 0.062400
> sqlite>  drop table if exists t1; create table t1(a datetime); begin; with
> recursive c(x) as (values(1) union all select x + 1 from c where x < 10)
> insert into t1(a) select datetime('now') from c; commit;
> Run Time: real 0.387 user 0.374402 sys 0.000000
> sqlite>
>
> Note: I pasted the first query that begins drop table, then used up arrow
> and edited the 10000000 to 10 for the second query. Further, I didn't have
> time to try a full billion, so I settled for 10 million. Also used a
> transient in-memory database. Finally, I am using Windows 7 Professional on
> a box with 16 GiB of RAM.
I tested it with a local on-disk database, not with an in-memory one. I 
wanted someone else to test it so we could see whether it has something 
to do with disk access being limited after a certain number of repeated 
attempts.
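For comparison with the in-memory session quoted above, the same statements
can be pointed at an on-disk file via .open (as the shell banner suggests);
a minimal sketch, where test.db is only a placeholder file name:

      .open test.db
      .timer on
      drop table if exists t1;   -- test.db above is just an example scratch file
      create table t1(a datetime);
      begin;
      with recursive
         c(x) as (values(1) union all select x + 1 from c where x < 10000000)
      insert into t1(a) select datetime('now') from c;   -- same 10M-row test, on disk
      commit;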

> Do you see the same behavior for a test of 10M rows (that the second line
> takes the same amount of time as the first)?
>
