On 05/29/2015 02:35 PM, Dominique Devienne wrote:
> On Fri, May 29, 2015 at 8:48 AM, Dan Kennedy <danielk1977 at gmail.com> wrote:
>
>> On 05/29/2015 12:59 PM, Zaumseil René wrote:
>>
>>> On 05/27/2015 11:59 PM, Wade, William wrote:
>>>
>>>> But instead of using a regular SQL transaction to drop all the old
>>>> tables, use the backup API to clobber the existing database with the new
>>>> one.
>>>>
>>>>     https://www.sqlite.org/c3ref/backup_finish.html
>>>>
>>>> Using the backup API, the clobber operation is still done as a regular
>>>> SQLite transaction - so all the locking and notifying of other clients
>>>> gets done right. The amount of IO (and CPU) required should depend on
>>>> the size of the new db only, not the existing db size. And it won't
>>>> matter if the existing db is corrupt or not - as the backup API never
>>>> actually examines the contents of the existing database.
>>>>
>>> Interesting idea. Could this also be a solution to my problem described
>>> in the thread "emptying tables"?
>>>
>> Maybe. Using the backup API to clobber one database with another featuring
>> a bunch of empty tables is likely quicker than actually deleting all the
>> data from the original.
>
> Dan, can you please explain quickly why? Does the backup API work at the
> page level, for example, and is thus more efficient than normal row-level
> ops? Thanks, --DD

That's pretty much it. The backup API just blindly copies pages of data
from the source database into the target. But running "DROP TABLE" or
"DELETE FROM" has to, at a minimum, iterate through the b-tree structures
and add the now-unused pages to the database free list.

With respect to this diagram:

   https://www.sqlite.org/arch.html

the backup API works at the Pager level, bypassing the B-Tree layer altogether.
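
If it helps, here is a minimal sketch in C of the clobber operation, based
on the example on the backup_finish page quoted earlier. The file names are
hypothetical and error handling is abbreviated:

  #include "sqlite3.h"

  /* Overwrite the database in zDestFile with the contents of zSrcFile
  ** using the online backup API. Pages are copied blindly, and the copy
  ** is committed on the destination as an ordinary write transaction. */
  static int clobber_db(const char *zSrcFile, const char *zDestFile){
    sqlite3 *pSrc = 0;
    sqlite3 *pDest = 0;
    sqlite3_backup *pBackup;
    int rc;

    rc = sqlite3_open(zSrcFile, &pSrc);               /* new, freshly built db */
    if( rc==SQLITE_OK ) rc = sqlite3_open(zDestFile, &pDest); /* db to clobber */

    if( rc==SQLITE_OK ){
      pBackup = sqlite3_backup_init(pDest, "main", pSrc, "main");
      if( pBackup ){
        sqlite3_backup_step(pBackup, -1);             /* -1 == copy all pages */
        sqlite3_backup_finish(pBackup);
      }
      rc = sqlite3_errcode(pDest);
    }

    sqlite3_close(pSrc);
    sqlite3_close(pDest);
    return rc;
  }

Nothing in there ever reads the old contents of zDestFile - its pages are
simply overwritten - which is why the cost depends on the size of the new
database only.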

Dan.