On Wed, Aug 7, 2013 at 6:36 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Wed, Aug 7, 2013 at 12:15 PM, Dominique Devienne <ddevie...@gmail.com>
> wrote:
>
> > On Wed, Aug 7, 2013 at 2:45 PM, Clemens Ladisch <clem...@ladisch.de>
> > wrote:
> >
> > > Dominique Devienne wrote:
> > > > We can of course copy the db file somewhere else with r/w access, or
> > > > copy the DB into an in-memory DB (for each table, create table
> > > > memdb.foo as select * from dskdb.foo) and upgrade and read that one
> > > > instead, but I was wondering whether there's another better solution
> > > > we could use?
> > >
> > > You can use the backup API to copy an entire database at once:
> > > <http://www.sqlite.org/backup.html>
> > >
> >
> > Thanks. That's more efficient and less code for sure, and what we're
> > using now.
> >
> > I just thought there might be a different trick possible to avoid
> > duplicating the whole DB, like forcing the journal to be in-memory, or
> > using WAL instead, or something.
> >
> > If that's the best we can do, then so be it.
> >
>
> That's probably about the best that is built-in.  However...
>
> You could write a "shim" VFS to fake a filesystem that appears to provide
> read/write semantics but which really only reads.  All writes would be
> stored in memory and would be forgotten the moment you close the database
> connection.
>

Thanks for the suggestion, Richard. I hadn't thought of that, and it would
indeed reduce our memory footprint.
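(For archive readers: the table-by-table copy mentioned upthread can be
sketched as below, here using Python's sqlite3 bindings. The file name
dskdb.db and table foo are illustrative, and the setup step just fabricates
a small example database; the attach is done read-write here for simplicity,
though a URI filename with mode=ro could enforce read-only access.)

```python
import os
import sqlite3

# Fabricate a small on-disk database to stand in for the real one.
if os.path.exists("dskdb.db"):
    os.remove("dskdb.db")
src = sqlite3.connect("dskdb.db")
src.execute("CREATE TABLE foo(x)")
src.execute("INSERT INTO foo VALUES (1)")
src.commit()
src.close()

# Open a writable in-memory database and attach the disk file to it.
mem = sqlite3.connect(":memory:")
mem.execute("ATTACH DATABASE 'dskdb.db' AS dskdb")

# Copy every table from the attached schema into the in-memory one,
# i.e. "create table memdb.foo as select * from dskdb.foo" per table.
tables = mem.execute(
    "SELECT name FROM dskdb.sqlite_master WHERE type = 'table'"
).fetchall()
for (name,) in tables:
    mem.execute(f"CREATE TABLE main.{name} AS SELECT * FROM dskdb.{name}")

mem.execute("DETACH DATABASE dskdb")
# mem now holds a private, fully writable copy of the data.
```

Note that this copies data only: indexes, triggers, and views would need
to be recreated separately, which is one reason the backup API is less code.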

We'll keep using the backup API for now, in our patch release, but we'll
definitely investigate using a VFS instead for our next feature release.
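(For archive readers: a minimal sketch of the backup-into-memory approach,
using Python's sqlite3 bindings, whose Connection.backup wraps SQLite's
sqlite3_backup_* API. The file name example.db and the schema-upgrade
statement are illustrative, and the setup step just fabricates the database
being copied.)

```python
import os
import sqlite3

# Fabricate a small on-disk database to stand in for the real one.
if os.path.exists("example.db"):
    os.remove("example.db")
src = sqlite3.connect("example.db")
src.execute("CREATE TABLE foo(x)")
src.execute("INSERT INTO foo VALUES (42)")
src.commit()
src.close()

# Open the file read-only, as if it lived on read-only media.
disk = sqlite3.connect("file:example.db?mode=ro", uri=True)

# Copy the whole database (schema, indexes, triggers, data) in one call.
mem = sqlite3.connect(":memory:")
disk.backup(mem)
disk.close()

# The in-memory copy is fully writable: schema upgrades now work.
mem.execute("ALTER TABLE foo ADD COLUMN bar INTEGER")
```

Unlike the per-table ATTACH copy, this preserves the entire schema, at the
cost of duplicating the whole database in memory.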

Thanks again, --DD
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
