Re: [sqlite] Large database backup

2019-08-02 Thread Tammisalo Toni
Thanks for your help! Especially the comment about the cache size helped a lot.
When I reduced the cache size to about 100 pages, I actually got it to do the
writing incrementally, in multiple sqlite3_backup_step() calls. With a bit more
fine-tuning I think it will be OK.

I also had an unrelated problem: I was using the same database handle for the
backup and for other work at the same time, which understandably caused
additional blocking.
 
After these changes it is now performing very well. Thanks!

  Toni Tammisalo
  ttamm...@iki.fi
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Large database backup

2019-08-01 Thread Dan Kennedy


On 1/8/62 19:49, Tammisalo Toni wrote:

Hi!

I have an application which is using an SQLite database in WAL mode. There is a
need for periodic backups to a remote site without obstructing normal operation.
Both read and write access are required during the backup. At the moment I have
a system which first blocks checkpoints, as otherwise the backup was restarted
too often. The backup is done to a remote database implemented with the SQLite
VFS layer.

What happens in my tests is that all of the sqlite3_backup_step() calls complete
without actually writing anything to the remote database; only during the last
step are all pages written. This would be annoying from a progress-monitoring
point of view. However, as the database mutex is held during this time, it
actually blocks all access to the database for a long period of time. Changing
the backup step size does not help, as all of this happens at the last step
regardless.

So, is this a bug? Am I doing something wrong? What I was hoping was that the
backup would not hold the database mutex while it is writing to the other
database, or at least allow splitting the work across sqlite3_backup_step()
calls so that at least some work could be done while the backup is in progress.
I actually have a strong impression that this worked better with some older
SQLite version. Currently using 3.27.2.

Relevant part of the stack trace:

#8  0x005dc870 in sqlite3OsWrite (id=0x7fc1a4120f98, pBuf=0x7fc1a47b0e88, amt=, offset=) at sqlite3.c:9
#9  pager_write_pagelist (pPager=0x7fc1a41216f8, pList=0x7fc1a47c0ec0) at sqlite3.c:54971
#10 0x005bb1a5 in sqlite3PagerCommitPhaseOne (pPager=0x7fc1a41216f8, zMaster=0x0, noSync=0) at sqlite3.c:57050
#11 0x005b968f in sqlite3_backup_step (p=0x7fc1a4056658, nPage=) at sqlite3.c:74033

It seems that all database pages are written out in pager_write_pagelist() in a
single loop.



The backup process writes through the cache of the destination database. So
data is only written to disk when either (a) the cache is full or (b) the
transaction is committed by the last sqlite3_backup_step() call. If you reduce
the size of the cache used by the destination db, writing should begin in an
earlier sqlite3_backup_step() call.
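[Editor's note: the behaviour described above can be seen with a minimal sketch
using Python's stdlib sqlite3 module, which wraps the same C backup API
(sqlite3_backup_init/step/finish). The table, row count, page counts, and the
use of in-memory databases are invented so the sketch is self-contained.]

```python
import sqlite3

# Build a source database with enough pages that the backup takes
# several sqlite3_backup_step() calls.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(50_000)))
src.commit()

dst = sqlite3.connect(":memory:")
# Dan's suggestion: shrink the destination cache so pages are flushed
# during intermediate steps instead of all at once in the final commit.
dst.execute("PRAGMA cache_size = 100")

steps = []
def progress(status, remaining, total):
    # Invoked after every underlying sqlite3_backup_step() call;
    # this is also the natural hook for progress monitoring.
    steps.append((remaining, total))

# Copy 10 pages per step; other connections can run between steps.
src.backup(dst, pages=10, progress=progress)

# steps now records multiple intermediate calls, not one final burst.
print(len(steps))
```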


Dan.




Also, I'm open to other suggestions. I was contemplating just copying the
database file directly while WAL checkpoints are blocked, but I read some
comments that suggested it would not be safe to do that.

Any help or suggestions would be appreciated!

   Toni Tammisalo
   ttamm...@iki.fi


Re: [sqlite] Large database backup

2019-08-01 Thread Dominique Devienne
On Thu, Aug 1, 2019 at 5:02 PM Olivier Mascia  wrote:

> > On 1 August 2019 at 14:49, Tammisalo Toni  wrote:
> > I have an application which is using an SQLite database in WAL mode. There is a need for periodic backups to a remote site without obstructing normal operation. Both read and write access are required during the backup. At the moment I have a system which first blocks checkpoints, as otherwise the backup was restarted too often. The backup is done to a remote database implemented with the SQLite VFS layer.
> > ...
> > Also, I'm open to other suggestions.
>
> You are using WAL mode.
> Have you tried coding your backup as a single step (passing -1 for the number of pages to step)?
>
> int status = sqlite3_backup_step(bck, -1);
>
> Or if you really want to call sqlite3_backup_step() incrementally (X pages at a time), then do BEGIN [DEFERRED] [TRANSACTION] first.
>
> Your backup copy should then be allowed to proceed from start to finish without impacting readers and writers, nor being impacted by them. You shouldn't see any restart. Your backup will be a representation of the database as it was when the single (or first) sqlite3_backup_step() call started.

I have little practical experience with this, but unless I'm mistaken, it does
mean the WAL file cannot be checkpointed while readers (including the backup)
are still operating. Not a problem per se, just something to be aware of. --DD
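[Editor's note: this caveat can be demonstrated directly. In the sketch below
(Python stdlib sqlite3; the temp file and schema are invented for the example),
a passive checkpoint cannot backfill WAL frames written after an open reader's
snapshot, so those frames stay in the WAL until the reader finishes.]

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

w = sqlite3.connect(path, isolation_level=None)  # autocommit writer
w.execute("PRAGMA journal_mode = WAL")
w.execute("CREATE TABLE t(x)")
w.execute("INSERT INTO t VALUES (1)")

r = sqlite3.connect(path)
r.execute("BEGIN")
r.execute("SELECT count(*) FROM t").fetchone()   # pins a read snapshot

w.execute("INSERT INTO t VALUES (2)")            # frames newer than the snapshot
busy, log, checkpointed = w.execute(
    "PRAGMA wal_checkpoint(PASSIVE)").fetchone()

# Frames the reader may still need are left in the WAL: checkpointed < log.
print(busy, log, checkpointed)
r.execute("COMMIT")
```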


Re: [sqlite] Large database backup

2019-08-01 Thread Olivier Mascia
> On 1 August 2019 at 14:49, Tammisalo Toni  wrote:
> 
> Hi!
> 
> I have an application which is using an SQLite database in WAL mode. There is a need for periodic backups to a remote site without obstructing normal operation. Both read and write access are required during the backup. At the moment I have a system which first blocks checkpoints, as otherwise the backup was restarted too often. The backup is done to a remote database implemented with the SQLite VFS layer.
> ...
> Also, I'm open to other suggestions.

You are using WAL mode.
Have you tried coding your backup as a single step (passing -1 for the number of pages to step)?

int status = sqlite3_backup_step(bck, -1);

Or if you really want to call sqlite3_backup_step() incrementally (X pages at a time), then do BEGIN [DEFERRED] [TRANSACTION] first.

Your backup copy should then be allowed to proceed from start to finish without
impacting readers and writers, nor being impacted by them. You shouldn't see
any restart. Your backup will be a representation of the database as it was
when the single (or first) sqlite3_backup_step() call started.
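[Editor's note: for comparison, this is what the one-shot variant looks like
through Python's stdlib sqlite3 wrapper, whose Connection.backup() passes -1 to
sqlite3_backup_step() by default, i.e. the whole database in a single step. The
in-memory databases and schema are invented for the sketch.]

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

dst = sqlite3.connect(":memory:")
# pages defaults to -1: the whole database is copied in one
# sqlite3_backup_step() call, taking the source read lock only once.
src.backup(dst)

print(dst.execute("SELECT count(*) FROM t").fetchone()[0])  # -> 1
```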

Background:

https://www.sqlite.org/c3ref/backup_finish.html#sqlite3backupstep

"Every call to sqlite3_backup_step() obtains a shared lock on the source database that lasts for the duration of the sqlite3_backup_step() call."

I can't comment on buffering issues, flushing things and so on when outputting
through a custom VFS.
—  
Best Regards, Meilleures salutations, Met vriendelijke groeten, Mit besten Grüßen,
Olivier Mascia





[sqlite] Large database backup

2019-08-01 Thread Tammisalo Toni
Hi!

I have an application which is using an SQLite database in WAL mode. There is a
need for periodic backups to a remote site without obstructing normal operation.
Both read and write access are required during the backup. At the moment I have
a system which first blocks checkpoints, as otherwise the backup was restarted
too often. The backup is done to a remote database implemented with the SQLite
VFS layer.

What happens in my tests is that all of the sqlite3_backup_step() calls complete
without actually writing anything to the remote database; only during the last
step are all pages written. This would be annoying from a progress-monitoring
point of view. However, as the database mutex is held during this time, it
actually blocks all access to the database for a long period of time. Changing
the backup step size does not help, as all of this happens at the last step
regardless.

So, is this a bug? Am I doing something wrong? What I was hoping was that the
backup would not hold the database mutex while it is writing to the other
database, or at least allow splitting the work across sqlite3_backup_step()
calls so that at least some work could be done while the backup is in progress.
I actually have a strong impression that this worked better with some older
SQLite version. Currently using 3.27.2.

Relevant part of the stack trace:

#8  0x005dc870 in sqlite3OsWrite (id=0x7fc1a4120f98, pBuf=0x7fc1a47b0e88, amt=, offset=) at sqlite3.c:9
#9  pager_write_pagelist (pPager=0x7fc1a41216f8, pList=0x7fc1a47c0ec0) at sqlite3.c:54971
#10 0x005bb1a5 in sqlite3PagerCommitPhaseOne (pPager=0x7fc1a41216f8, zMaster=0x0, noSync=0) at sqlite3.c:57050
#11 0x005b968f in sqlite3_backup_step (p=0x7fc1a4056658, nPage=) at sqlite3.c:74033

It seems that all database pages are written out in pager_write_pagelist() in a
single loop.

Also, I'm open to other suggestions. I was contemplating just copying the
database file directly while WAL checkpoints are blocked, but I read some
comments that suggested it would not be safe to do that.

Any help or suggestions would be appreciated!

  Toni Tammisalo
  ttamm...@iki.fi