On Fri, Nov 11, 2011 at 00:52:23 +0100, Fabian wrote:
> 2011/11/10 Richard Hipp <d...@sqlite.org>
> > Because when you are inserting the Nth row, SQLite has no idea of how many
> > more rows will follow or how big the subsequent rows will be, so it has no
> > way to reserve contiguous space sufficient to hold them all.  The result is
> > that parts of the table and parts of the indices become interleaved in the
> > file.
> >
> But nothing is written to disk until I call commit (i'm using
> journal_mode=memory), so when SQLite has to start writing the data, it
> knows the exact total number of rows, and also that no other rows will
> follow.

That's not how journals work. Or rather, that is how the "wal" journal works.
All the other journal modes, including "memory", work by writing the data
directly to the database file and storing the information needed to return the
database to its last consistent state in the journal.
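To see which mode is in effect, you can set and read it back with a PRAGMA. A minimal sketch using Python's stdlib sqlite3 module (the temp-file path is just for the example):

```python
import os
import sqlite3
import tempfile

# Use a throwaway on-disk database so the journal-mode PRAGMA behaves
# as it does for a real file (in-memory databases always report "memory").
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# PRAGMA journal_mode returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=memory").fetchone()[0]
print(mode)  # -> "memory"

conn.close()
```

Note that journal_mode=memory only moves the *journal* into RAM; the page writes themselves still go to the database file as the transaction proceeds.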

> But then again, maybe the format of the journal in memory, is an exact copy
> of the bytes it will write to disk, and in that case I understand that it
> would be very inefficient to start shuffling things, instead of just
> dumping it. I pictured it like a temporary table, in which case it would be
> fairly easy to restructure things before writing.

No, the journal never contains the bytes that are going to be written to
disk. It contains the bytes that were on the disk before. The transaction is
then committed by simply deleting the journal, and rolled back by writing the
contents of the journal back into the file.
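The mechanism above can be modeled in a few lines. This is a toy sketch of rollback-journal semantics, not SQLite's actual code: pages are written directly to the "database", the journal keeps only the pre-images, commit drops the journal, and rollback restores the pre-images.

```python
# Toy model of a rollback journal (hypothetical structures, not SQLite code).
db = {1: b"old-page-1", 2: b"old-page-2"}   # database "pages" on disk
journal = {}                                 # pre-images of modified pages

def write_page(page_no: int, data: bytes) -> None:
    if page_no not in journal:
        journal[page_no] = db[page_no]       # save the old bytes first
    db[page_no] = data                       # then write directly to the db

def commit() -> None:
    journal.clear()                          # commit = delete the journal

def rollback() -> None:
    for page_no, old in journal.items():     # restore the saved pre-images
        db[page_no] = old
    journal.clear()

write_page(1, b"new-page-1")
rollback()
assert db[1] == b"old-page-1"                # the old bytes came back
```

The key point the sketch illustrates: the new data never passes through the journal at all, so the journal's layout can't help reorganize what lands in the database file.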

The only exception is the write-ahead log, "wal", journal mode, which was
introduced in version 3.7. In that case the journal contains something
like "patches" to be applied to the database. Transactions are committed by
just marking the data valid in the journal, and there is a special
"checkpoint" operation that actually writes the data to the database file
itself. It has the advantage that readers are not blocked by a writer, and it
is often faster for small transactions, but it does not handle huge
transactions well (at $work we use sqlite to process huge datasets where one
transaction often writes a table with several million rows; wal is not much
use in such a context).
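A quick sketch of the wal workflow from Python (file path is again hypothetical): commit only marks frames valid in the -wal file, and a checkpoint PRAGMA copies them into the main database file.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Switch to write-ahead logging; the PRAGMA echoes the mode in effect.
wal_mode = conn.execute("PRAGMA journal_mode=wal").fetchone()[0]
print(wal_mode)  # -> "wal"

conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()    # commit = mark the new frames valid in the -wal file

# Explicit checkpoint: copy the wal frames into the main database file.
# Returns (busy, log-frames, checkpointed-frames).
busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)"
).fetchone()
conn.close()
```

SQLite also checkpoints automatically (PRAGMA wal_autocheckpoint, 1000 pages by default), which is part of why a single multi-million-row transaction doesn't fit the wal design well: the wal file must hold the whole transaction until it commits.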

-- 
                                                 Jan 'Bulb' Hudec <b...@ucw.cz>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users