2016-04-16 14:52 GMT+02:00 R Smith <rsmith at rsweb.co.za>:
>
>
> On 2016/04/16 11:59 AM, Cecil Westerhof wrote:
>
>> I am playing a bit with SQLite. I first had a table with 1E8 elements.
>> When trying to drop it, it looked like SQLite hung. I tried it from DB
>> Browser and from a Java program.
>> I just tried it with a table of 1E7 elements. That was dropped in about 13
>> seconds.
>> I will try it again with 1E8 elements, but it takes about 4 hours to generate.
>> Is it possible that SQLite has trouble dropping very large tables? It was
>> 5.2 GB. With 1E7 elements the table is 512 MB.
>>
>> The definition of the table:
>> CREATE TABLE testUniqueUUID (
>> UUID blob,
>>
>> PRIMARY KEY(UUID)
>> CHECK(TYPEOF(UUID) = 'blob' AND
>> LENGTH(UUID) = 16 AND
>> SUBSTR(HEX(UUID), 13, 1) == '4' AND
>> SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
>> )
>> );
>>
>>
> I just duplicated the experiment (though I did not include the CHECK
> constraint) on SQLitespeed using the standard SQLite library; here are the
> results for the 10 million rows (1E+7):
>
>
> -- SQLite version 3.9.2 [ Release: 2015-11-02 ] on SQLitespeed version 2.0.2.4.
>
> -- Script Items: 4 Parameter Count: 0
> -- 2016-04-16 14:42:43.333 | [Info] Script Initialized, Started executing...
> --
> ================================================================================================
>
> CREATE TABLE testUniqueUUID (
> UUID BLOB PRIMARY KEY
> );
>
> -- Item Stats:  Item No: 1  Query Size (Chars): 62
> -- VM Work Steps: 43  Rows Modified: 0
> -- Full Query Time: 0d 00h 00m and 00.001s
> -- Query Result: Success.
> --
> ------------------------------------------------------------------------------------------------
>
> WITH RndGen(i,RndBlob) AS (
> SELECT 0, (randomblob(16))
> UNION ALL
> SELECT i+1, (randomblob(16)) FROM RndGen WHERE i<10000000
> )
> INSERT INTO testUniqueUUID (UUID) SELECT RndBlob FROM RndGen;
>
>
> -- Item Stats:  Item No: 2  Query Size (Chars): 199
> -- Result Columns: 0  Result Rows: 0
> -- VM Work Steps: 290000050  Rows Modified: 10000001
> -- Full Query Time: 0d 00h 02m and 10.878s
> -- Query Result: Success.
> --
> ------------------------------------------------------------------------------------------------
>
For me this took about 7 minutes. But I also do more, e.g. the CHECK constraint.
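
The extra work per row is the CHECK constraint from my table definition. A
minimal sketch of what it enforces, in Python rather than my actual Java
program (the helper name is_valid_v4_blob is made up for illustration):

```python
import uuid

def is_valid_v4_blob(b: bytes) -> bool:
    """Mirror the table's CHECK constraint: a 16-byte blob whose hex
    string has '4' at position 13 (the UUID version nibble) and one of
    8/9/A/B at position 17 (the RFC 4122 variant nibble)."""
    if len(b) != 16:
        return False
    h = b.hex().upper()
    # SQL SUBSTR is 1-based, so positions 13 and 17 are h[12] and h[16].
    return h[12] == '4' and h[16] in ('8', '9', 'A', 'B')

print(is_valid_v4_blob(uuid.uuid4().bytes))  # a real version-4 UUID passes
print(is_valid_v4_blob(b'\x00' * 16))        # an all-zero blob fails
```
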
> SELECT UUID FROM testUniqueUUID LIMIT 10;
>
>
> -- UUID
> -- ------------------------------------
> -- 0xA3044750B1A8567E7FD9DACD5C0C64CF
> -- 0xC6C6AAFAE6179E7B28867D5FB6AED7A6
> -- 0x2267D5856D5D7601FA9E0D8A1E6A66BC
> -- 0x63BEB2ECC58EA6D02D30ED27A3A50971
> -- 0x18477B93BD35C7A2ED83010619CA3887
> -- 0x47D7F3284B094CBE3BF6D77DC974F147
> -- 0x77736E93FAFE0436199CE84760A1072A
> -- 0x015E14BEA6D3C889958329CAF9C11F5C
> -- 0x1805A44908518BE6D6DE6BA63B5A9B71
> -- 0xE21DA4DFD367286DE89343FB02B9F8EF
>
> -- Item Stats:  Item No: 3  Query Size (Chars): 43
> -- Result Columns: 1  Result Rows: 10
> -- VM Work Steps: 48  Rows Modified: 0
> -- Full Query Time: 0d 00h 00m and 00.001s
> -- Query Result: Success.
> --
> ------------------------------------------------------------------------------------------------
>
> DROP TABLE testUniqueUUID;
>
> -- Item Stats:  Item No: 4  Query Size (Chars): 28
> -- VM Work Steps: 149  Rows Modified: 0
> -- Full Query Time: 0d 00h 00m and 00.721s
> -- Query Result: Success.
> --
> ------------------------------------------------------------------------------------------------
>
> -- Script Stats: Total Script Execution Time: 0d 00h 02m and 11.733s
> --               Total Script Query Time:     0d 00h 02m and 11.601s
> -- Total Database Rows Changed: 10000001
> -- Total Virtual-Machine Steps: 290000290
> -- Last executed Item Index: 4
> -- Last Script Error:
> --
> ------------------------------------------------------------------------------------------------
>
> -- 2016-04-16 14:44:55.054 | [Success] Script Success.
>
> As you can see, the INSERT obviously takes some time (even more so if the
> CHECK constraint is added), but the DROP TABLE takes almost no time here...
>
The drop is a very big difference: 0.7 seconds versus 13, almost 20 times as
long. Could I be doing something wrong?
> Let me try the 100 million rows, this may take some time - I will post
> again when it is done.
>
I am curious.
If useful, I could share the program I am using.
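
In outline it does something like this (a Python sketch with an assumed
helper name fill, not the actual Java program):

```python
import sqlite3
import uuid

def fill(db_path: str, n: int, batch: int = 10_000) -> None:
    """Insert n random version-4 UUIDs as 16-byte blobs, in batches,
    committing once per batch to keep each transaction bounded."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS testUniqueUUID (
        UUID BLOB,
        PRIMARY KEY (UUID),
        CHECK (TYPEOF(UUID) = 'blob' AND
               LENGTH(UUID) = 16 AND
               SUBSTR(HEX(UUID), 13, 1) = '4' AND
               SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B'))
    )""")
    remaining = n
    while remaining > 0:
        size = min(batch, remaining)
        # uuid4() always satisfies the version/variant checks above.
        rows = [(uuid.uuid4().bytes,) for _ in range(size)]
        con.executemany("INSERT INTO testUniqueUUID (UUID) VALUES (?)", rows)
        con.commit()
        remaining -= size
    con.close()

fill(":memory:", 10_000)
```
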
--
Cecil Westerhof