PtrMap pages may be too much overhead in my case. I have occasionally run
vacuum on some of these databases to see the effect, and it was not very
significant.

This is likely because the databases are heavily skewed towards inserting
(and indexing) data rather than updating/deleting it, and while the tables
are quite fragmented, since I am using SSDs, the gains from a vacuum
defragmentation appear marginal.
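
(For a rough, beforehand estimate of how much a vacuum could reclaim, the
standard pragmas can be compared - note this only counts whole free pages,
it says nothing about intra-page fragmentation:)

    PRAGMA page_count;      -- total pages in the database file
    PRAGMA freelist_count;  -- pages sitting unused on the free list
    PRAGMA page_size;       -- multiply by the counts above to get bytes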

Eric



On Mon, Jul 30, 2018 at 10:33 AM, Dominique Devienne <ddevie...@gmail.com>
wrote:

> Oops, sent too early...
>
> On Mon, Jul 30, 2018 at 10:29 AM Dominique Devienne <ddevie...@gmail.com>
> wrote:
>
> >> On Mon, Jul 30, 2018 at 10:11 AM Eric Grange <egra...@glscene.org> wrote:
> >
> >> @David Yip
> >> > If you have the dbstat module enabled (-DSQLITE_ENABLE_DBSTAT_VTAB), you
> >> > can query the dbstat virtual table for the number of pages used per
> >> > table and index, e.g.
> >>
> >> Thanks, I did not know about that module; however, it is a bit slow: on a
> >> 7 GB database I have, it takes almost 5 minutes, and I have larger
> >> databases :/
> >> So it's definitely something that would have to run in the background or
> >> at night.
> >>
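
(For reference, the kind of dbstat query meant above would look something
like this - the columns are the documented ones, the aggregation itself is
just an illustration:)

    SELECT name,                    -- table or index name
           count(*)    AS pages,    -- dbstat has one row per page
           sum(pgsize) AS bytes
    FROM dbstat
    GROUP BY name
    ORDER BY pages DESC;
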
> >
> > Well, you probably already know that in SQLite, there's a 100-byte
> > header, then N fixed-size pages, where the page size is configurable from
> > 512 bytes to 64 KB by powers of 2. The first page contains the
> > sqlite_master table, which references the "root" page of each table and
> > index. But that's it. To know how many pages each table/index uses, SQLite
> > (or any other tool) will need to read each object's root page, and
> > traverse the b-tree of pages rooted at it (and their overflow pages).
> > Then repeat for each object. Thus in most cases, all pages of the DB will
> > need to be "paged", i.e. read, so that's a lot of IO, so not quite fast.
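
(Those structures can be peeked at directly from SQL, using only standard
pragmas and the sqlite_master table:)

    PRAGMA page_size;    -- the fixed page size, 512 bytes to 64 KB
    PRAGMA page_count;   -- N, the number of fixed-size pages in the file
    SELECT type, name, rootpage
    FROM sqlite_master;  -- root page number of each table and index
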
> >
> > If your DB has auto or incremental vacuum configured, you'll have special
> > PtrMap pages.
> >
>
> See https://www.sqlite.org/fileformat.html#pointer_map_or_ptrmap_pages
>
> Perhaps thanks to those, you'd be able to figure out how many pages each
> object uses faster, from just those PtrMap pages and the root pages from
> sqlite_master. But then you'd need to figure this out yourself. And accept
> the price of SQLite maintaining/updating those pages for you during
> "normal" DMLs. I don't know what the overhead is exactly, on average. But
> it can't be faster than not maintaining those pages, for sure :). FWIW.
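
(For anyone wanting to experiment: PtrMap pages only exist when auto_vacuum
is enabled, and turning it on for an existing database only takes effect
after a full vacuum - the 100 below is just an arbitrary example value:)

    PRAGMA auto_vacuum;                -- 0 = none, 1 = full, 2 = incremental
    PRAGMA auto_vacuum = INCREMENTAL;
    VACUUM;                            -- rebuilds the file with ptrmap pages
    PRAGMA incremental_vacuum(100);    -- later: free up to 100 pages at a time
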
>
>
> >
> >> @D Burgess
> >> > download sqlite3_analyzer
> >>
> >> Thanks, the text output is interesting with built-in documentation.
> >
> >
> > If I recall correctly, sqlite3_analyzer's output is based on the dbstat
> > vtable, so it's unlikely to be faster I assume. --DD
> >
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
